Merge pull request 'fix-socials' (#3) from fix-socials into master
All checks were successful
Build and Push Image / Build and push image (push) Successful in 10m5s
Reviewed-on: #3
.gitignore (vendored, new file, +3)
@@ -0,0 +1,3 @@
*__pycache__*
/src/output/*
.venv
@@ -1,182 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/data-engineering.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" content="data engineering" />
<meta name="tags" content="Amazon" />
<meta name="tags" content="Managed Services" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/appflow-production.html">
|
||||
<meta property="og:title" content="Implementing Appflow in a Production Datalake">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2023-05-23 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Implementing Appflow in a Production Datalake</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Tue 23 May 2023
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
<p>I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was a tiny little segment about a product that AWS had released called <a href="https://aws.amazon.com/appflow/">Amazon Appflow</a>. This product <em>claimed</em> to be able to automate and simplify the link between different API endpoints, REST or otherwise, and send that data to another destination, whether that is Redshift, Aurora, a general relational database in RDS, or S3.</p>
<p>This was particularly interesting to me because I had recently finished creating an S3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production, and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.</p>
<p>Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product, and ultimately why I chose a managed service for this over implementing something myself in Python using Dagster, which I have also spun up within our cluster on AWS.</p>
<h3>Datalake Extraction Layer</h3>
<p>I often find that the flakiest part of any data solution, or at least a data solution that consumes data other applications create, is the extraction layer. If you are going to get a bug it's going to be here; not always, but in my experience the first port of call is... did it load :/</p>
<p>This is why I believe one of the most saturated parts of the enterprise data market is in fact the extraction layer. Every man and his dog (not to mention start-up) seems to be trying to "solve" this problem. The result is often that, as a data architect, you are spoilt for choice. BUT it seems that every different type of connection requires a different extractor, all for varying costs and with varying success.</p>
<p>The RDBMS extraction space is largely solved, and there are products like <a href="https://www.qlik.com/us/products/qlik-replicate">Qlik Replicate</a> or <a href="https://aws.amazon.com/dms/">AWS DMS</a>, as well as countless others, that can do this at the CDC level, and they work relatively well, albeit at a considerable cost.</p>
<p>The API landscape for extraction is particularly saturated. I believe I saw on LinkedIn a graphic showing no fewer than 50 companies offering extraction from API endpoints. I'm not au fait with all of them, but they largely seem to <em>claim</em> to achieve the same goal, with varying levels of depth.</p>
<p>This proliferation of API extractors obviously coincides with the proliferation of SaaS products taking over from the bespoke software that enterprises would have once run, hooked up to their existing enterprise DBs, and used. This new landscape also shows that rather than an enterprise owning their data, they often need the skills, and increasingly the $$$'s, to access it.</p>
<p>This complexity of access is normally coupled with poor documentation, where it's a crapshoot as to whether there is a Swagger UI, let alone useful API documentation (this is getting better though).</p>
<h3>So why Managed for Extraction?</h3>
<p>As you can see above, when you're extracting data it is so often a crapshoot, and writing something bespoke is so incredibly risky that the idea of it gives me hives. I could write a containerised Python function for each of my API extractions, or a small batch loader for RDBMS myself, and have a small cluster of these things extracting from tables and API endpoints, but the thought of managing all of that, especially in a one-man DataOps team, is far too overwhelming.</p>
<p>And right there are my criteria for choosing a managed service.</p>
<ol>
<li>
<p>Do I want to manage this myself?</p>
</li>
<li>
<p>Is there any benefit to me managing this?</p>
</li>
<li>
<p>Is it more cost effective to have someone else manage it?</p>
</li>
</ol>
<p>Invariably, the extraction layer, at least when answering the questions above, gives me the irks, and I just decide to run with a simple managed service where I can point at the source and target, click go, and watch it go brrrrrrrrrrrrr.</p>
<p>When you couple ease of use with relative reliability, the value proposition of designing bespoke applications for the extraction task rapidly decreases, at least for me.</p>
<p>And this is why extraction, at least in systems I design, is more often than not handled by a managed service, and why AppFlow, with the concept of a managed service for API calls to S3, was a cool bit of tech I had to swing a chance to play with.</p>
<h3>AppFlow, The Good, The Bad, The Ugly</h3>
<p>Using AppFlow turned out to be a largely simple affair, even in Terraform. Once you have the correct authentication tokens it's more or less select the service you want and then create a "flow" for each endpoint. The complex part is the "Map_All" function for the endpoint. When triggered it automatically creates a 1-to-1 mapping for all fields in the endpoint into the target file (in my case parquet), BUT this actually fundamentally changes the flow you have created and thus causes Terraform to shit the bed. This can be dealt with via a lifecycle rule, but it means schema changes in the endpoint could cause issues in the future.</p>
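<p>To make that lifecycle rule concrete, here is a minimal sketch of the pattern (illustrative only: the connector, profile, bucket and field names below are placeholders rather than my production config, and a real flow needs the full connector profile set up first):</p>
<pre><code># Minimal, hypothetical example: connector, profile, bucket and field names are placeholders.
resource "aws_appflow_flow" "example_endpoint" {
  name = "example-endpoint-to-s3"

  source_flow_config {
    connector_type         = "Salesforce"        # whichever SaaS connector the endpoint uses
    connector_profile_name = "example-connection"
    source_connector_properties {
      salesforce {
        object = "Account"
      }
    }
  }

  destination_flow_config {
    connector_type = "S3"
    destination_connector_properties {
      s3 {
        bucket_name = "example-datalake-raw"
        s3_output_format_config {
          file_type = "PARQUET"
        }
      }
    }
  }

  # A single seed mapping; "Map_All" in the console replaces these tasks
  # with a generated 1-to-1 mapping for every field in the endpoint.
  task {
    source_fields     = ["Id"]
    destination_field = "Id"
    task_type         = "Map"
  }

  trigger_config {
    trigger_type = "OnDemand"
  }

  # Stop Terraform trying to revert the console-generated mappings on every plan.
  lifecycle {
    ignore_changes = [task]
  }
}
</code></pre>
<p>The trade-off, as mentioned above, is that once Terraform ignores the task block, a schema change in the endpoint won't show up in a plan and has to be handled outside of Terraform.</p>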
<p>All in all, having a managed service to handle API endpoint extraction has been great. It has enabled the expansion of a datalake with no bespoke application code to manage the extraction of information from API endpoints, which has proved to be a massive time and money saver overall.</p>
<p>I am yet to play with establishing a custom endpoint, and it will be interesting to see just how much work that is compared with writing the code for a bespoke application... sounds like a good blog post if I get to do it one day.</p>
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,138 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Archives</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Archives for Andrew Ridgway's Blog</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<dl>
|
||||
<dt>Wed 24 July 2024</dt>
|
||||
<dd><a href="http://localhost:8000/proxmox-cluster-1.html">Building a 5 node Proxmox cluster!</a></dd>
|
||||
<dt>Fri 23 February 2024</dt>
|
||||
<dd><a href="http://localhost:8000/cover-letter.html">A Cover Letter</a></dd>
|
||||
<dt>Fri 23 February 2024</dt>
|
||||
<dd><a href="http://localhost:8000/resume.html">A Resume</a></dd>
|
||||
<dt>Wed 15 November 2023</dt>
|
||||
<dd><a href="http://localhost:8000/metabase-duckdb.html">Metabase and DuckDB</a></dd>
|
||||
<dt>Tue 23 May 2023</dt>
|
||||
<dd><a href="http://localhost:8000/appflow-production.html">Implementing Appflow in a Production Datalake</a></dd>
|
||||
<dt>Wed 10 May 2023</dt>
|
||||
<dd><a href="http://localhost:8000/how-i-built-the-damn-thing.html">Dawn of another blog attempt</a></dd>
|
||||
<hr>
|
||||
</dl>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,209 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles by Andrew Ridgway</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles by Andrew Ridgway</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
|
||||
<h2 class="post-title">
|
||||
Building a 5 node Proxmox cluster!
|
||||
</h2>
|
||||
</a>
|
||||
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 24 July 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/cover-letter.html" rel="bookmark" title="Permalink to A Cover Letter">
|
||||
<h2 class="post-title">
|
||||
A Cover Letter
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/resume.html" rel="bookmark" title="Permalink to A Resume">
|
||||
<h2 class="post-title">
|
||||
A Resume
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of My work Experience</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/metabase-duckdb.html" rel="bookmark" title="Permalink to Metabase and DuckDB">
|
||||
<h2 class="post-title">
|
||||
Metabase and DuckDB
|
||||
</h2>
|
||||
</a>
|
||||
<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 15 November 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
|
||||
<h2 class="post-title">
|
||||
Implementing Appflow in a Production Datalake
|
||||
</h2>
|
||||
</a>
|
||||
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Tue 23 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="bookmark" title="Permalink to Dawn of another blog attempt">
|
||||
<h2 class="post-title">
|
||||
Dawn of another blog attempt
|
||||
</h2>
|
||||
</a>
|
||||
<p>Containers and How I take my learnings from home and apply them to work</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 10 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,131 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Authors</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles by </h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html" rel="bookmark">
|
||||
<h2 class="post-title">
|
||||
Andrew Ridgway (6)
|
||||
</h2>
|
||||
</a>
|
||||
</div>
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,129 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Categories</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Andrew Ridgway's Blog - Categories</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul>
|
||||
<li><a href="http://localhost:8000/category/business-intelligence.html">Business Intelligence</a></li>
|
||||
<li><a href="http://localhost:8000/category/data-engineering.html">Data Engineering</a></li>
|
||||
<li><a href="http://localhost:8000/category/resume.html">Resume</a></li>
|
||||
<li><a href="http://localhost:8000/category/server-architecture.html">Server Architecture</a></li>
|
||||
</ul>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,145 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles in the Business Intelligence category</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/business-intelligence.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the Business Intelligence category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/metabase-duckdb.html" rel="bookmark" title="Permalink to Metabase and DuckDB">
|
||||
<h2 class="post-title">
|
||||
Metabase and DuckDB
|
||||
</h2>
|
||||
</a>
|
||||
<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 15 November 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,165 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles in the Data Analytics category</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/data-analytics.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the Data Analytics category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/notebook-or-bi.html" rel="bookmark" title="Permalink to Notebook or BI, What is the most appropiate communication medium">
|
||||
<h2 class="post-title">
|
||||
Notebook or BI, What is the most appropiate communication medium
|
||||
</h2>
|
||||
</a>
|
||||
<p>When is a notebook enough or when do we need a dashboard</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Thu 13 July 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<p>
|
||||
<script type="text/javascript" src="https://sessionize.com/api/speaker/sessions/83c5d14a-bd19-46b4-8335-0ac8358ac46d/0x0x91929ax">
|
||||
</script>
|
||||
</p>
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://twitter.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://facebook.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://github.com/armistace">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,158 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles in the Data Engineering category</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/data-engineering.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the Data Engineering category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
|
||||
<h2 class="post-title">
|
||||
Implementing Appflow in a Production Datalake
|
||||
</h2>
|
||||
</a>
|
||||
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Tue 23 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="bookmark" title="Permalink to Dawn of another blog attempt">
|
||||
<h2 class="post-title">
|
||||
Dawn of another blog attempt
|
||||
</h2>
|
||||
</a>
|
||||
<p>Containers and How I take my learnings from home and apply them to work</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 10 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@@ -1,161 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>A Ridgway Musings - Articles in the How To category</title>
|
||||
|
||||
<link href="http://blog.aridgwayweb.com/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="A Ridgway Musings Full Atom Feed" />
|
||||
<link href="http://blog.aridgwayweb.com/feeds/how-to.atom.xml" type="application/atom+xml" rel="alternate" title="A Ridgway Musings Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://blog.aridgwayweb.com/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://blog.aridgwayweb.com/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://blog.aridgwayweb.com/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="A Ridgway Musings">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://blog.aridgwayweb.com/">A Ridgway Musings</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the How To category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://blog.aridgwayweb.com/how-i-built-the-damn-thing.html" rel="bookmark" title="Permalink to A New Way To Build A Free Blog">
|
||||
<h2 class="post-title">
|
||||
A New Way To Build A Free Blog
|
||||
</h2>
|
||||
</a>
|
||||
<p>How I built this blog or doing stuff on the cheap!</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://blog.aridgwayweb.com/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Sat 18 September 2021
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://twitter.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://facebook.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://github.com/armistace">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://blog.aridgwayweb.com/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://blog.aridgwayweb.com/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://blog.aridgwayweb.com/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,158 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles in the Resume category</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/resume.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the Resume category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/cover-letter.html" rel="bookmark" title="Permalink to A Cover Letter">
|
||||
<h2 class="post-title">
|
||||
A Cover Letter
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/resume.html" rel="bookmark" title="Permalink to A Resume">
|
||||
<h2 class="post-title">
|
||||
A Resume
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of My work Experience</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,145 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog - Articles in the Server Architecture category</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/server-architecture.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Articles in the Server Architecture category</h1>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
|
||||
<h2 class="post-title">
|
||||
Building a 5 node Proxmox cluster!
|
||||
</h2>
|
||||
</a>
|
||||
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 24 July 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,163 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/resume.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="Cover Letter" />
|
||||
<meta name="tags" contents="Resume" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/cover-letter.html">
|
||||
<meta property="og:title" content="A Cover Letter">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2024-02-23 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>A Cover Letter</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<p>To whom it may concern</p>
|
||||
<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p>
|
||||
<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p>
|
||||
<p>In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from Analysts in the Customer Experience team right up to C suite executives, and has included preparing material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies. </p>
|
||||
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular, I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer a daily copy and paste of folders with dates appended on major updates. My solution involved establishing guidelines for the use of git version control so that versioning could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
|
||||
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, including chat and DNS, and I am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate. That should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards, allowing me to monitor and control different facets of our house like temperature and light. </p>
|
||||
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:</p>
|
||||
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
|
||||
<p><a href="[https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
|
||||
<p>I look forward to hearing from you soon.</p>
|
||||
<p>Sincerely,</p>
|
||||
<hr>
|
||||
<p>Andrew Ridgway</p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,399 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all-en.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html"><p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p></summary><content type="html"><h4>A quick summary of this post by AI</h4>
|
||||
<p>I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;) </p>
|
||||
<p><strong>Summary:</strong></p>
|
||||
<p>A quick look at some of the things I've used Proxmox for:</p>
|
||||
<ul>
|
||||
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
|
||||
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
|
||||
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
|
||||
</ul>
|
||||
<p>As part of the summary it came up with this interesting idea of "follow up" questions. I'm leaving it here as I thought it was an interesting take on what I can write about in the future.</p>
|
||||
<p><strong>Follow-up Questions:</strong></p>
|
||||
<ol>
|
||||
<li><strong>Kubernetes Cluster:</strong></li>
|
||||
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
|
||||
<li>
|
||||
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>CI/CD with Gitea:</strong></p>
|
||||
</li>
|
||||
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
|
||||
<li>
|
||||
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Monitoring and Logging:</strong></p>
|
||||
</li>
|
||||
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
|
||||
<li>
|
||||
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Future Plans:</strong></p>
|
||||
</li>
|
||||
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
|
||||
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
|
||||
</ol>
|
||||
<h2>A Picture is worth a thousand words</h2>
|
||||
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
|
||||
<p><em>Yes, I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit since, but you sort of get what's going on.</em></p>
|
||||
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
|
||||
<h2>The idea</h2>
|
||||
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines, the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
|
||||
<p>I eventually decided I'd use some hardware I already had lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB DDR3 RAM). I'd also use an old Dell workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS shared via NFS.</p>
|
||||
<p>Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).</p>
|
||||
<h3>The platform for the Idea!</h3>
|
||||
<p>After doing some amazing Reddit research and looking at various homelab ideas for doing what I wanted, it became very, very clear that Proxmox was going to be the solution. It's a Debian based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VM's and "LXC's" or Linux jails... These have turned out to be really, really useful but more on that later.</p>
|
||||
<p>First, let's define what on earth Proxmox is.</p>
|
||||
<h4>Proxmox</h4>
|
||||
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
|
||||
<ol>
|
||||
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
|
||||
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
|
||||
</ol>
|
||||
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Efficient Resource Allocation:</strong>
|
||||
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Live Migration:</strong>
|
||||
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>High Availability:</strong>
|
||||
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Open-Source and Free:</strong>
|
||||
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
|
||||
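<p>To make the resource allocation and live migration points above a little more concrete, here is a rough sketch of what that looks like from the Proxmox shell. This is illustrative only - the VMIDs (101, 201) and the node name (pve-node2) are placeholders, not my actual cluster:</p>
<div class="highlight"><pre><code># Illustrative sketch - 101/201 and pve-node2 are placeholder IDs/names
# Cap an LXC guest's CPU and memory after creation
pct set 201 --cores 2 --memory 2048

# Move a running VM to another node in the cluster without shutting it down
qm migrate 101 pve-node2 --online

# Containers can be moved too (a restart migration is the usual mode)
pct migrate 201 pve-node2 --restart
</code></pre></div>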
<p><strong>Relevant Links:</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE documentation: <a href="https://pve-proxmox-community.org/">https://pve-proxmox-community.org/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
|
||||
<h3>LXC's</h3>
|
||||
<p>To start to understand Proxmox we do need to focus in on one important piece: LXC's. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.</p>
|
||||
<p><strong>Isolation Level</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>System Call Filtering</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Resource Management</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker enforces strict resource limits on every container by default.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Networking</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>What LXC is Focused On:</p>
|
||||
<p>Given these differences, here's what LXC primarily focuses on:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
|
||||
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by using device passthrough I can get a graphics card in there for some sweet, sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
|
||||
<p>The LXC's have also been super easy to set up with the help of tteck's helper scripts: <a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>. It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
|
||||
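<p>For anyone curious what the non-helper-script route looks like, below is a minimal sketch of spinning up an LXC from the Proxmox command line. The VMID, template filename and resource sizes are placeholder assumptions rather than my real config, and the graphics card passthrough I mentioned for Plex needs extra device entries in the container's config that I haven't reproduced here:</p>
<div class="highlight"><pre><code># Rough sketch only - VMID 200, the template filename and the sizes are assumptions
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname plex \
  --cores 4 --memory 4096 \
  --rootfs local-lvm:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1

# Start the container and get a shell inside it
pct start 200
pct enter 200
</code></pre></div>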
<h3>VM's</h3>
|
||||
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
|
||||
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
|
||||
<p>The big question here is: what do I use the VM capability of Proxmox for?</p>
|
||||
<p>I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs predominantly as a Kubernetes cluster... and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)</p>
|
||||
<p>I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.</p>
|
||||
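<p>For reference, the basic k3s bootstrap is essentially the upstream quick-start. The sketch below shows how a server and an agent node are typically joined - SERVER_IP and the pasted token are placeholders, and none of my Longhorn or cluster-specific flags are shown:</p>
<div class="highlight"><pre><code># On the first VM - install the k3s server (control plane)
curl -sfL https://get.k3s.io | sh -

# Grab the join token the server generates
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional VM - join the cluster as an agent using that token
curl -sfL https://get.k3s.io | K3S_URL=https://SERVER_IP:6443 K3S_TOKEN=PASTE_TOKEN_HERE sh -
</code></pre></div>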
<p>Anyway, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXC's to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and Mistral need to sign off and get posting. </p>
|
||||
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more)</p></content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html"><p>A Summary of what I've done and Where I'd like to go for prospective Employers</p></summary><content type="html"><p>To whom it may concern</p>
|
||||
<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p>
|
||||
<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p>
|
||||
<p>In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from Analysts in the Customer Experience team right up to C suite executives, and has included preparing material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies. </p>
|
||||
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular, I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer a daily copy and paste of folders with dates appended on major updates. My solution involved establishing guidelines for the use of git version control so that versioning could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
|
||||
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, including chat and DNS, and I am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate. That should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards, allowing me to monitor and control different facets of our house like temperature and light. </p>
|
||||
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try and lessen that burden. You can see some of this work in its very early stages here:</p>
|
||||
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
|
||||
<p><a href="[https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
|
||||
<p>I look forward to hearing from you soon.</p>
|
||||
<p>Sincerely,</p>
|
||||
<hr>
|
||||
<p>Andrew Ridgway</p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>A Resume</title><link href="http://localhost:8000/resume.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/resume.html</id><summary type="html"><p>A Summary of My work Experience</p></summary><content type="html"><h1>OVERVIEW</h1>
|
||||
<p>I am a Senior Data Engineer looking to transition my skills to Data and Solution
|
||||
Architecting as well as project management. I have spent the better part of the
|
||||
last decade refining my abilities in taking business requirements and turning
|
||||
those into actionable data engineering, analytics, and software projects with
|
||||
trackable metrics. I believe in agnosticism when it comes to coding languages
|
||||
and have experimented in my own time with many different languages. In my
|
||||
career I have used Python, .NET, PowerShell, TSQL, VB and SAS (multiple
|
||||
products) in an Enterprise capacity. I also have experience using Google Cloud
|
||||
Platform and AWS tools for ETL and data platform development as well as git
|
||||
for version control and deployment using various IAC tools. I have also
|
||||
conducted data analysis and modelling on business metrics to find relationships
|
||||
between both staff and customer behavior and produced actionable
|
||||
recommendations based on the conclusions. In a private context I have also
|
||||
experimented with C, C# and Kotlin. I am looking to further my career by taking
|
||||
my passion for data engineering and analysis as well as web and software
|
||||
development and applying it in a strategic context.</p>
|
||||
<h1>SKILLS &amp; ABILITIES</h1>
|
||||
<ul>
|
||||
<li>Python (scripting, compiling, notebooks – Sagemaker, Jupyter)</li>
|
||||
<li>git</li>
|
||||
<li>SAS (Base, EG, VA)</li>
|
||||
<li>Various Google Cloud Tools (Data Fusion, Compute Engine, Cloud Functions)</li>
|
||||
<li>Various Amazon Tools (EC2, RDS, Kinesis, Glue, Redshift, Lambda, ECS, ECR, EKS)</li>
|
||||
<li>Streaming Technologies (Kafka, Hive, Spark Streaming)</li>
|
||||
<li>Various DB platforms both on Prem and Serverless (MariaDB/MySql, Postgres/Redshift, SQL Server, RDS/Aurora variants)</li>
|
||||
<li>Various Microsoft Products (PowerBI, TSQL, Excel, VBA)</li>
|
||||
<li>Linux Server Administration (cron, bash, systemD)</li>
|
||||
<li>ETL/ELT Development</li>
|
||||
<li>Basic Data Modelling (Kimball, SCD Type 2)</li>
|
||||
<li>IAC (Cloud Formation, Terraform)</li>
|
||||
<li>Datahub Deployment</li>
|
||||
<li>Dagster Orchestration Deployments</li>
|
||||
<li>DBT Modelling and Design Deployments</li>
|
||||
<li>Containerised and Cloud Driven Data Architecture</li>
|
||||
</ul>
|
||||
<h1>EXPERIENCE</h1>
|
||||
<h2>Cloud Data Architect</h2>
|
||||
<h3><em>Redeye Apps</em></h3>
|
||||
<h4><em>May 2022 - Present</em></h4>
|
||||
<ul>
|
||||
<li>Greenfields Research, Design and Deployment of S3 datalake (Parquet)</li>
|
||||
<li>AWS DMS, S3, Athena, Glue</li>
|
||||
<li>Research Design and Deployment of Catalog (Datahub)</li>
|
||||
<li>Design of Data Governance Process (Datahub driven)</li>
|
||||
<li>Research Design and Deployment of Orchestration and Modelling for Transforms (Dagster/DBT into Mesos)</li>
|
||||
<li>CI/CD design and deployment of modelling and orchestration using Gitlab</li>
|
||||
<li>Research, Design and Deployment of ML Ops Dev pipelines and deployment strategy</li>
|
||||
<li>Design of ETL/Pipelines (DBT)</li>
|
||||
<li>Design of Customer Facing Data Products and deployment methodologies (Fully automated via Kafka/Dagster/DBT)</li>
|
||||
</ul>
|
||||
<h2>Data Engineer,</h2>
|
||||
<h3><em>TechConnect IT Solutions</em></h3>
|
||||
<h4><em>August 2021 – May 2022</em></h4>
|
||||
<ul>
|
||||
<li>Design of Cloud Data Batch ETL solutions using Python (Glue)</li>
|
||||
<li>Design of Cloud Data Streaming ETL solution using Python (Kinesis)</li>
|
||||
<li>Solve complex client business problems using software to join and transform data from DB’s, Web API’s, Application API’s and System logs</li>
|
||||
<li>Build CI/CD pipelines to ensure smooth deployments (Bitbucket, gitlab)</li>
|
||||
<li>Apply Prebuilt ML models to software solutions (Sagemaker)</li>
|
||||
<li>Assist with the architecting of Containerisation solutions (Docker, ECS, ECR)</li>
|
||||
<li>API testing and development (gRPC, Rest)</li>
|
||||
</ul>
|
||||
<h2>Enterprise Data Warehouse Developer</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>August 2019 - August 2021</em></h4>
|
||||
<ul>
|
||||
<li>ETL development of CRM, WFP, Outbound Dialer, Inbound switch in Google Cloud, SAS, TSQL</li>
|
||||
<li>Bringing new data to the business to analyse for new insights</li>
|
||||
<li>Redeveloped Version Control and brought git to the data team</li>
|
||||
<li>Introduced python for API enablement in the Enterprise Data Warehouse</li>
|
||||
<li>Partnering with the business to focus data project on actual need and translating into technical requirements</li>
|
||||
</ul>
|
||||
<h2>Business Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2018 - August 2019</em></h4>
|
||||
<ul>
|
||||
<li>Automate Service Performance Reporting using PowerShell/VBA/SAS</li>
|
||||
<li>Learn and leverage SAS EG and VA to streamline Microsoft Excel Reporting</li>
|
||||
<li>Identify and develop data pipelines to source data from multiple sources easily and collate into a single source to identify relationships and trends</li>
|
||||
<li>Technologies used include VBA, PowerShell, SQL, Web API’s, SAS</li>
|
||||
<li>Where SAS is inappropriate use VBA to automate processes in Microsoft Access and Excel</li>
|
||||
<li>Gather Requirements to build meaningful reporting solutions</li>
|
||||
<li>Provide meaningful analysis on business performance and provide relevant presentations and reports to senior stakeholders.</li>
|
||||
</ul>
|
||||
<h2>Forecasting and Capacity Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2017 – January 2018</em></h4>
|
||||
<ul>
|
||||
<li>Develop the outbound forecasting model for the Auto and General sales call center by analysing the relationship between customer decisions and workload drivers</li>
|
||||
<li>This includes the complete data pipeline for the model from identifying and sourcing data, building the reporting and analysing the data and associated drivers.</li>
|
||||
<li>Forecast inbound workload requirements for the Auto and General sales call center using time series analysis</li>
|
||||
<li>Learn and leverage the Aspect Workforce Management System to ensure efficiency of forecast generation</li>
|
||||
<li>Learn and leverage the capabilities of SAS Enterprise Guide to improve accuracy</li>
|
||||
<li>Liaise with people across the business to ensure meaningful, accurate analysis is provided to senior stakeholders</li>
|
||||
<li>Analyse monthly, weekly and intraday requirements and ensure forecast is accurately predicting workload for breaks, meetings and Leave</li>
|
||||
</ul>
|
||||
<h2>Senior HR Performance Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>June 2016 - January 2017</em></h4>
|
||||
<ul>
|
||||
<li>Harmonise various systems to develop a unified workforce reporting and analysis framework with appropriate metrics</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h2>Workforce Business Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>July 2015 – June 2016</em></h4>
|
||||
<ul>
|
||||
<li>Develop and refine current workforce analysis techniques and databases</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Act as liaison between shared service providers and executives and facilitate communication during the implementation of a payroll leave audit</li>
|
||||
<li>Gather reporting requirements from various business areas and produce ad-hoc and regular reports as required</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h1>EDUCATION</h1>
|
||||
<ul>
|
||||
<li>2011 Bachelor of Business Management, University of Queensland</li>
|
||||
<li>2008 Bachelor of Arts, University of Queensland</li>
|
||||
</ul>
|
||||
<h1>REFERENCES</h1>
|
||||
<ul>
|
||||
<li>Anthony Stiller Lead Developer, Data warehousing, Queensland Health</li>
|
||||
</ul>
|
||||
<p><em>0428 038 031</em></p>
|
||||
<ul>
|
||||
<li>Jaime Brian Head of Cloud Ninjas, TechConnect</li>
|
||||
</ul>
|
||||
<p><em>0422 012 17</em></p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>Metabase and DuckDB</title><link href="http://localhost:8000/metabase-duckdb.html" rel="alternate"></link><published>2023-11-15T20:00:00+10:00</published><updated>2023-11-15T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-11-15:/metabase-duckdb.html</id><summary type="html"><p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p></summary><content type="html"><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p>
|
||||
<p>However, for this to work we need some form of containerised reporting application... lucky for us there is <a href="https://www.metabase.com/">Metabase</a>, which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has an admin UI accessible over a web page, all whilst keeping the data locked to our network?</p>
|
||||
<h3>The Beginnings of an Idea</h3>
|
||||
<p>Ok so... Big first question: can DuckDB and Metabase talk? Well... not quite. But first let's take a quick look at the architecture we'll be employing here. </p>
|
||||
<p><img alt="Duckdb Architecture" height="auto" width="100%" src="http://localhost:8000/images/metabase_duckdb.png"></p>
|
||||
<p>But you'll notice this pretty glossed over line, "Connector" - that right there is the clincher. So what is this "Connector"? </p>
|
||||
<p>To deep dive into this would take a whole blog, so to give you something to quickly wrap your head around: it's the glue that lets Metabase query your data source. The reality is it's a JDBC driver compiled against Metabase. </p>
|
||||
<p>Thankfully Metabase points you to a <a href="https://github.com/AlexR2D2/metabase_duckdb_driver">community driver</a> for linking to DuckDB (hopefully it will be brought into Metabase proper sooner rather than later). </p>
|
||||
<p>Now, the released driver is still compiled against DuckDB 0.8 while 0.9 is the latest stable, but hopefully the <a href="https://github.com/AlexR2D2/metabase_duckdb_driver/pull/19">PR</a> for this will land very soon, giving a quick way to link to the latest and greatest in DuckDB from Metabase.</p>
|
||||
<h3>But How do we get Data?</h3>
|
||||
<p>Brilliant. Using the recommended Dockerfile we can load up a Metabase container with the DuckDB driver pre-built:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">46.2</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">AlexR2D2</span><span class="o">/</span><span class="n">metabase_duckdb_driver</span><span class="o">/</span><span class="n">releases</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="mf">0.1</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;java&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;-jar&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/metabase.jar&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>Great. Now the big question: how do we get the data into the damn thing? Interestingly, when I was initially designing this I had the thought of leveraging the in-memory capabilities of DuckDB and pulling in from the parquet on s3 directly as needed. After all, the cluster is on AWS so the s3 API requests should be unbelievably fast anyway, so why bother with a persistent database? </p>
|
||||
<p>Now that we have the default credentials chain loaded it is trivial to query parquet from s3:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_parquet</span><span class="p">(</span><span class="s1">&#39;s3://&lt;bucket&gt;/&lt;file&gt;&#39;</span><span class="p">);</span>
|
||||
</code></pre></div>
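<p>For context, a minimal sketch of that fully in-memory approach from Python looks something like the following. The profile, bucket and table names here are placeholders, not the real ones:</p>
<div class="highlight"><pre><code># minimal sketch: an in-memory DuckDB connection reading parquet straight from s3
# (&#39;my_profile&#39; and the bucket/table names are placeholders)
import duckdb

conn = duckdb.connect()  # no path supplied, so the database lives purely in memory
conn.sql(&quot;CALL load_aws_credentials(&#39;my_profile&#39;)&quot;)
conn.sql(&quot;SELECT count(*) FROM read_parquet(&#39;s3://my-curated-bucket/my_table/*&#39;)&quot;).show()
</code></pre></div>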
|
||||
|
||||
<p>However, if you're reading directly off parquet you all of a sudden need to consider the partitioning, and I also found out that, if the parquet is being actively written to at the time of querying, DuckDB has a hissyfit about metadata not matching the query. Needless to say DuckDB and streaming parquet are not happy bedfellows (<em>and frankly were not designed to be, so this is ok</em>). And the idea of trying to explain all this to the run-of-the-mill reporting analyst, who I hope is a business sort of person rather than a tech one, honestly gave me hives... so I had to make it easier.</p>
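<p>To give a flavour of what "considering the partitioning" means for the poor analyst, a partition-aware query ends up looking something like the sketch below (the paths and the year/month partition columns are made-up examples, not our actual layout):</p>
<div class="highlight"><pre><code># illustrative only: reading hive-style partitioned parquet (year=.../month=...)
import duckdb

conn = duckdb.connect()
conn.sql(&quot;CALL load_aws_credentials(&#39;my_profile&#39;)&quot;)  # placeholder profile
conn.sql(&quot;&quot;&quot;
    SELECT *
    FROM read_parquet(&#39;s3://my-curated-bucket/my_table/*/*/*.parquet&#39;, hive_partitioning = true)
    WHERE year = 2023 AND month = 11
&quot;&quot;&quot;).show()
</code></pre></div>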
|
||||
<p>The compromise occurred to me... the curated layer is only built daily for reporting, and using that, I could create a DuckDB file on disk that could be loaded into the Metabase container itself.</p>
|
||||
<p>With some very simple Python as an operation in our orchestrator I had a job that would read directly from our curated parquet and create a DuckDB file from it... without giving away too much, the job primarily consisted of this: </p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">def</span> <span class="nf">duckdb_builder</span><span class="p">(</span><span class="n">table</span><span class="p">):</span>
|
||||
<span class="n">conn</span> <span class="o">=</span> <span class="n">duckdb</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s2">&quot;curated_duckdb.duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;CALL load_aws_credentials(&#39;</span><span class="si">{</span><span class="n">aws_profile</span><span class="si">}</span><span class="s2">&#39;)&quot;</span><span class="p">)</span>
|
||||
<span class="c1">#This removes a lot of weirdass ANSI in logs you DO NOT WANT</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="s2">&quot;PRAGMA enable_progress_bar=false&quot;</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Create </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> in duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">sql</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">&quot;CREATE OR REPLACE TABLE </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> AS SELECT * FROM read_parquet(&#39;s3://</span><span class="si">{</span><span class="n">curated_bucket</span><span class="si">}</span><span class="s2">/</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2">/*&#39;)&quot;</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> Created&quot;</span><span class="p">)</span>
|
||||
</code></pre></div>
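<p>For that snippet to actually run it needs a little scaffolding around it. Roughly, the surrounding module looks something like the sketch below; the logger setup, profile, bucket and table list shown here are placeholders standing in for the real configuration:</p>
<div class="highlight"><pre><code># assumed scaffolding around duckdb_builder (all values are placeholders)
import logging

import duckdb

log = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

aws_profile = &quot;my_profile&quot;            # placeholder AWS profile name
curated_bucket = &quot;my-curated-bucket&quot;  # placeholder curated-layer bucket

for table in [&quot;customers&quot;, &quot;orders&quot;]:  # placeholder table list
    duckdb_builder(table)
</code></pre></div>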
|
||||
|
||||
<p>And then an upload to an s3 bucket.</p>
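<p>The upload itself is nothing fancy; a minimal boto3 version would look something like this (the bucket and key are assumptions, not the real names):</p>
<div class="highlight"><pre><code># minimal sketch of the upload step (bucket and key are placeholders)
import boto3

s3 = boto3.client(&quot;s3&quot;)
s3.upload_file(&quot;curated_duckdb.duckdb&quot;, &quot;my-artifact-bucket&quot;, &quot;duckdb/curated_duckdb.duckdb&quot;)
</code></pre></div>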
|
||||
<p>This of course necessitated a cron job baked into the Metabase container itself to actually pull the DuckDB file in every morning. After some careful analysis of timings (because I'm too lazy to implement message queues) I set up an s3 cp job that could be cronned directly from the container itself. This gives us a self-updating Metabase container with a DuckDB backend for client-facing reporting right in the interface. AND because the DuckDB file is baked right into the container... there are NO associated s3 or DPU costs (merely the cost of running a relatively large container).</p>
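<p>The helper the cron job calls is equally small. A sketch of what <code>download_duckdb.py</code> might contain is below; the bucket, key and target path are assumptions based on the Dockerfile that follows (which creates <code>/duckdb_data</code>):</p>
<div class="highlight"><pre><code># sketch of helper_scripts/download_duckdb.py (bucket, key and path are placeholders)
import boto3

s3 = boto3.client(&quot;s3&quot;)
s3.download_file(&quot;my-artifact-bucket&quot;, &quot;duckdb/curated_duckdb.duckdb&quot;, &quot;/duckdb_data/curated_duckdb.duckdb&quot;)
</code></pre></div>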
|
||||
<p>The final Dockerfile looks like this</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">47.6</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">mkdir</span><span class="w"> </span><span class="o">-</span><span class="n">p</span><span class="w"> </span><span class="o">/</span><span class="n">duckdb_data</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">entrypoint</span><span class="o">.</span><span class="n">sh</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">helper_scripts</span><span class="o">/</span><span class="n">download_duckdb</span><span class="o">.</span><span class="n">py</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">update</span><span class="w"> </span><span class="o">-</span><span class="n">y</span><span class="w"> </span><span class="o">&amp;&amp;</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">upgrade</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">python3</span><span class="w"> </span><span class="n">python3</span><span class="o">-</span><span class="n">pip</span><span class="w"> </span><span class="n">cron</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">pip3</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">boto3</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">cat</span><span class="p">;</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">&quot;0 */6 * * * python3 /home/helper_scripts/download_duckdb.py&quot;</span><span class="p">;</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;bash&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/entrypoint.sh&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And there we have it... an in-memory, containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business... fully automated, deployable via CI/CD, and providing data updates daily.</p>
|
||||
<p>Now the embedded part... which isn't built yet, but I'll make sure to update you once we have (if we do) because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the <a href="https://www.metabase.com/learn/administration/git-based-workflow">Metabase documentation</a>; the unfortunate thing about it is that Metabase <em>have</em> hidden this behind the enterprise license... but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.</p>
|
||||
<p>Until then....</p></content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html"><p>How Appflow simplified a major extract layer and when I choose Managed Services</p></summary><content type="html"><p>I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called <a href="https://aws.amazon.com/appflow/">Amazon Appflow</a>. This product <em>claimed</em> to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.</p>
|
||||
<p>This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first AppFlow integration to the datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box. </p>
|
||||
<p>Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.</p>
|
||||
<h3>Datalake Extraction Layer</h3>
|
||||
<p>I often find that the flakiest part of any data solution, or at least a data solution that consumes data other applications create, is the extraction layer. If you are going to get a bug it's going to be here; not always, but in my experience the first port of call is... did it load :/ </p>
|
||||
<p>It is why I believe one of the most saturated parts of the enterprise data market is in fact the extraction layer. Every man and his dog (not to mention every start-up) seems to be trying to "solve" this problem. The result is often that, as a data architect, you are spoilt for choice. BUT it seems that every different type of connection requires a different extractor, all for varying costs and with varying success. </p>
|
||||
<p>The RDBMS extraction space is largely solved, and there are products like <a href="https://www.qlik.com/us/products/qlik-replicate">Qlik Replicate</a>, or <a href="https://aws.amazon.com/dms/">AWS DMS</a>, as well as countless others that can do this at the CDC level, and they work relatively well, albeit at a considerable cost. </p>
|
||||
<p>The API landscape for extraction is particularly saturated. I believe I saw on LinkedIn a graphic showing no fewer than 50 companies offering extraction from API endpoints. I'm not au fait with all of them, but they largely seem to <em>claim</em> to achieve the same goal, with varying levels of depth.</p>
|
||||
<p>This proliferation of API extractors obviously coincides with the proliferation of SaaS products taking over from the bespoke software that enterprises would once have run, hooked up to their existing enterprise DB's and used. This new landscape also shows that rather than an enterprise owning their data, they often need the skills, and increasingly the $$$'s, to access it.</p>
|
||||
<p>This complexity of access is normally coupled with poor documentation, where it's a crapshoot as to whether there is a Swagger UI, let alone useful API documentation (this is getting better though).</p>
|
||||
<h3>So why Managed for Extraction?</h3>
|
||||
<p>As you can see above, when you're extracting data it is so often a crapshoot, and writing something bespoke is so incredibly risky that the idea of it gives me hives. I could write a containerised Python function for each of my API extractions, or a small batch loader for RDBMS myself, and have a small cluster of these things extracting from tables and API endpoints, but the thought of managing all of that, especially in a one-man DataOps team, is far too overwhelming.</p>
|
||||
<p>And right there are my criteria for choosing a managed service:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p>Do I want to manage this myself?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is there any benefit to me managing this?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is it more cost effective to have someone else manage it?</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Invariably, the extraction layer, at least when answering the questions above, gives me the irks, and I just decide to run with a simple managed service where I can point at the source and target, click go, and watch it go brrrrrrrrrrrrr.</p>
|
||||
<p>When you couple ease of use with the relative reliability, the value proposition of designing bespoke applications for the extraction task rapidly decreases, at least for me.</p>
|
||||
<p>And this is why Extraction, at least in systems I design, is more often than not handled by a managed service, and why AppFlow, with the concept of a managed service for API calls to s3, was a cool tech I had to swing a chance to play with.</p>
|
||||
<h3>AppFlow, The Good, The Bad, The Ugly</h3>
|
||||
<p>Using AppFlow turned out to be a largely simple affair, even in Terraform. Once you have the correct authentication tokens it's more or less select the service you want and then create a "flow" for each endpoint. The complex part is the "Map_All" function for the endpoint. When triggered it automatically creates a 1-to-1 mapping for all fields in the endpoint into the target file (in my case parquet), BUT this actually fundamentally changes the flow you have created and thus causes Terraform to shit the bed. This can be dealt with via a lifecycle rule, but it means schema changes in the endpoint could cause issues in the future. </p>
|
||||
<p>All in all, having a managed service to handle API endpoint extraction has been great and has enabled the expansion of a datalake with no bespoke application code to manage the extraction of information from API endpoints, which has proved to be a massive time and money saver overall.</p>
|
||||
<p>I am yet to play with establishing a custom endpoint and it will be interesting to see just how much work this is compared with writing the code for a bespoke application... sounds like a good blog post if I get to do it one day.</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="Amazon"></category><category term="Managed Services"></category></entry><entry><title>Dawn of another blog attempt</title><link href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="alternate"></link><published>2023-05-10T20:00:00+10:00</published><updated>2023-05-10T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</id><summary type="html"><p>Containers and How I take my learnings from home and apply them to work</p></summary><content type="html"><p>So, once again I'm trying this blog thing out. For the first time though I'm not going to make it niche, or cultural, but just whatever I feel like writing about. For a number of years now my day job has been in and around the world of data. Starting out as a "Workforce Analyst" (read downloading csv's of payroll data and making Excel reports) and over time moving to my current role where I build and design systems for ingesting data from various systems to support analysts and Data Scientists. My hobby however has been... well... tech. These two things have over time merged into the weirdness that is my professional life and I'd like to take elements of this life and share my learnings.</p>
|
||||
<p>The core reason for this is that I keep reading that it's great to write. The other is I've decided that getting my thoughts into some form of order might be beneficial both to me and perhaps a wider audience. There are so many things I've attempted, succeeded and failed at that, at the very least, it will be worth getting them into a central repository of knowledge so that I, and maybe others, can share and use them as time progresses. I also keep seeing on <a href="https://news.ycombinator.com">Hacker News</a> a lot of references to the guys who've been writing blogs since the early days of the internet, and I want to contribute my little piece to what I want the internet to be.</p>
|
||||
<p>So strap yourselves in as I take you on my data/self-hosting journey, sprinkled with a little dev ops and data engineering to whet your appetite over the next little while. Sometimes I might even throw in some cultural or political commentary just to keep things spicy!</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="containers"></category></entry></feed>
|
@ -1,399 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/all.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html"><p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p></summary><content type="html"><h4>A quick summary of this post by AI</h4>
|
||||
<p>I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;) </p>
|
||||
<p><strong>Summary:</strong></p>
|
||||
<p>A quick look at some of the things I've used Proxmox for:</p>
|
||||
<ul>
|
||||
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
|
||||
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
|
||||
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
|
||||
</ul>
|
||||
<p>As part of the summary it came up with this interesting idea of "follow up" questions. I'm leaving it here as I thought it was an interesting take on what I can write about in the future.</p>
|
||||
<p><strong>Follow-up Questions:</strong></p>
|
||||
<ol>
|
||||
<li><strong>Kubernetes Cluster:</strong></li>
|
||||
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
|
||||
<li>
|
||||
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>CI/CD with Gitea:</strong></p>
|
||||
</li>
|
||||
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
|
||||
<li>
|
||||
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Monitoring and Logging:</strong></p>
|
||||
</li>
|
||||
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
|
||||
<li>
|
||||
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Future Plans:</strong></p>
|
||||
</li>
|
||||
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
|
||||
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
|
||||
</ol>
|
||||
<h2>A Picture is worth a thousand words</h2>
|
||||
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
|
||||
<p><em>Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit since but you sort of get what's going on.</em></p>
|
||||
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
|
||||
<h2>The idea</h2>
|
||||
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
|
||||
<p>I eventually decided I'd use some hardware I already had lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5's upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB in RAID 1 on BTRFS shared via NFS.</p>
|
||||
<p>Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).</p>
|
||||
<h3>The platform for the Idea!</h3>
|
||||
<p>After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted, it became very very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VM's and "LXC's", or Linux jails... These have turned out to be really really useful but more on that later.</p>
|
||||
<p>First let's define what on earth Proxmox is.</p>
|
||||
<h4>Proxmox</h4>
|
||||
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
|
||||
<ol>
|
||||
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
|
||||
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
|
||||
</ol>
|
||||
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Efficient Resource Allocation:</strong>
|
||||
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Live Migration:</strong>
|
||||
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>High Availability:</strong>
|
||||
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Open-Source and Free:</strong>
|
||||
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
|
||||
<p><strong>Relevant Links:</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE documentation: <a href="https://pve-proxmox-community.org/">https://pve-proxmox-community.org/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
|
||||
<h3>LXC's</h3>
|
||||
<p>To start to understand Proxmox we do need to focus in on one important piece: LXC's. These are containers, but not Docker containers. Below I've had Mistral summarise some of the differences.</p>
|
||||
<p><strong>Isolation Level</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>System Call Filtering</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Resource Management</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker enforces strict resource limits on every container by default.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Networking</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>What LXC is Focused On:</p>
|
||||
<p>Given these differences, here's what LXC primarily focuses on:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
|
||||
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by using device loading I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
|
||||
<p>The LXC's have also been super easy to set up with the help of tteck's helper scripts: <a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>. It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
|
||||
<h3>VM's</h3>
|
||||
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
|
||||
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
|
||||
<p>The big question here is: what do I use the VM capability of Proxmox for?</p>
|
||||
<p>I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster... and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me)</p>
|
||||
<p>I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.</p>
|
||||
<p>Anyway, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXC's to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and Mistral need to sign off and get posting. </p>
|
||||
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my Mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more)</p></content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html"><p>A Summary of what I've done and Where I'd like to go for prospective Employers</p></summary><content type="html"><p>To whom it may concern,</p>
|
||||
<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p>
|
||||
<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP, including ML workloads using Sagemaker. </p>
|
||||
<p>In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from Analysts in the Customer Experience team right up to C suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels and the complexities involved in communicating to both technical and less technical employees, particularly in relation to data and ML technologies. </p>
|
||||
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer handled by a daily copy and paste of folders with dates for major updates. My solution involved establishing guidelines for the use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
|
||||
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p>
|
||||
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here:</p>
|
||||
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
|
||||
<p><a href="[https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
|
||||
<p>I look forward to hearing from you soon.</p>
|
||||
<p>Sincerely,</p>
|
||||
<hr>
|
||||
<p>Andrew Ridgway</p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>A Resume</title><link href="http://localhost:8000/resume.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/resume.html</id><summary type="html"><p>A Summary of My work Experience</p></summary><content type="html"><h1>OVERVIEW</h1>
|
||||
<p>I am a Senior Data Engineer looking to transition my skills to Data and Solution
|
||||
Architecting as well as project management. I have spent the better part of the
|
||||
last decade refining my abilities in taking business requirements and turning
|
||||
those into actionable data engineering, analytics, and software projects with
|
||||
trackable metrics. I believe in agnosticism when it comes to coding languages
|
||||
and have experimented in my own time with many different languages. In my
|
||||
career I have used Python, .NET, PowerShell, TSQL, VB and SAS (multiple
|
||||
products) in an Enterprise capacity. I also have experience using Google Cloud
|
||||
Platform and AWS tools for ETL and data platform development as well as git
|
||||
for version control and deployment using various IAC tools. I have also
|
||||
conducted data analysis and modelling on business metrics to find relationships
|
||||
between both staff and customer behavior and produced actionable
|
||||
recommendations based on the conclusions. In a private context I have also
|
||||
experimented with C, C# and Kotlin. I am looking to further my career by taking
|
||||
my passion for data engineering and analysis as well as web and software
|
||||
development and applying it in a strategic context.</p>
|
||||
<h1>SKILLS &amp; ABILITIES</h1>
|
||||
<ul>
|
||||
<li>Python (scripting, compiling, notebooks – Sagemaker, Jupyter)</li>
|
||||
<li>git</li>
|
||||
<li>SAS (Base, EG, VA)</li>
|
||||
<li>Various Google Cloud Tools (Data Fusion, Compute Engine, Cloud Functions)</li>
|
||||
<li>Various Amazon Tools (EC2, RDS, Kinesis, Glue, Redshift, Lambda, ECS, ECR, EKS)</li>
|
||||
<li>Streaming Technologies (Kafka, Hive, Spark Streaming)</li>
|
||||
<li>Various DB platforms both on Prem and Serverless (MariaDB/MySql, Postgres/Redshift, SQL Server, RDS/Aurora variants)</li>
|
||||
<li>Various Microsoft Products (PowerBI, TSQL, Excel, VBA)</li>
|
||||
<li>Linux Server Administration (cron, bash, systemD)</li>
|
||||
<li>ETL/ELT Development</li>
|
||||
<li>Basic Data Modelling (Kimball, SCD Type 2)</li>
|
||||
<li>IAC (Cloud Formation, Terraform)</li>
|
||||
<li>Datahub Deployment</li>
|
||||
<li>Dagster Orchestration Deployments</li>
|
||||
<li>DBT Modelling and Design Deployments</li>
|
||||
<li>Containerised and Cloud Driven Data Architecture</li>
|
||||
</ul>
|
||||
<h1>EXPERIENCE</h1>
|
||||
<h2>Cloud Data Architect</h2>
|
||||
<h3><em>Redeye Apps</em></h3>
|
||||
<h4><em>May 2022 - Present</em></h4>
|
||||
<ul>
|
||||
<li>Greenfields Research, Design and Deployment of S3 datalake (Parquet)</li>
|
||||
<li>AWS DMS, S3, Athena, Glue</li>
|
||||
<li>Research Design and Deployment of Catalog (Datahub)</li>
|
||||
<li>Design of Data Governance Process (Datahub driven)</li>
|
||||
<li>Research Design and Deployment of Orchestration and Modelling for Transforms (Dagster/DBT into Mesos)</li>
|
||||
<li>CI/CD design and deployment of modelling and orchestration using Gitlab</li>
|
||||
<li>Research, Design and Deployment of ML Ops Dev pipelines and deployment strategy</li>
|
||||
<li>Design of ETL/Pipelines (DBT)</li>
|
||||
<li>Design of Customer Facing Data Products and deployment methodologies (Fully automated via Kafka/Dagster/DBT)</li>
|
||||
</ul>
|
||||
<h2>Data Engineer,</h2>
|
||||
<h3><em>TechConnect IT Solutions</em></h3>
|
||||
<h4><em>August 2021 – May 2022</em></h4>
|
||||
<ul>
|
||||
<li>Design of Cloud Data Batch ETL solutions using Python (Glue)</li>
|
||||
<li>Design of Cloud Data Streaming ETL solution using Python (Kinesis)</li>
|
||||
<li>Solve complex client business problems using software to join and transform data from DB’s, Web API’s, Application API’s and System logs</li>
|
||||
<li>Build CI/CD pipelines to ensure smooth deployments (Bitbucket, gitlab)</li>
|
||||
<li>Apply Prebuilt ML models to software solutions (Sagemaker)</li>
|
||||
<li>Assist with the architecting of Containerisation solutions (Docker, ECS, ECR)</li>
|
||||
<li>API testing and development (gRPC, Rest)</li>
|
||||
</ul>
|
||||
<h2>Enterprise Data Warehouse Developer</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>August 2019 - August 2021</em></h4>
|
||||
<ul>
|
||||
<li>ETL development of CRM, WFP, Outbound Dialer, Inbound switch in Google Cloud, SAS, TSQL</li>
|
||||
<li>Bringing new data to the business to analyse for new insights</li>
|
||||
<li>Redeveloped Version Control and brought git to the data team</li>
|
||||
<li>Introduced python for API enablement in the Enterprise Data Warehouse</li>
|
||||
<li>Partnering with the business to focus data projects on actual needs and translating these into technical requirements</li>
|
||||
</ul>
|
||||
<h2>Business Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2018 - August 2019</em></h4>
|
||||
<ul>
|
||||
<li>Automate Service Performance Reporting using PowerShell/VBA/SAS</li>
|
||||
<li>Learn and leverage SAS EG and VA to streamline Microsoft Excel Reporting</li>
|
||||
<li>Identify and develop data pipelines to source data from multiple sources easily and collate into a single source to identify relationships and trends</li>
|
||||
<li>Technologies used include VBA, PowerShell, SQL, Web API’s, SAS</li>
|
||||
<li>Where SAS is inappropriate use VBA to automate processes in Microsoft Access and Excel</li>
|
||||
<li>Gather Requirements to build meaningful reporting solutions</li>
|
||||
<li>Provide meaningful analysis on business performance and provide relevant presentations and reports to senior stakeholders.</li>
|
||||
</ul>
|
||||
<h2>Forecasting and Capacity Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2017 – January 2018</em></h4>
|
||||
<ul>
|
||||
<li>Develop the outbound forecasting model for the Auto and General sales call center by analysing the relationship between customer decisions and workload drivers</li>
|
||||
<li>This includes the complete data pipeline for the model from identifying and sourcing data, building the reporting and analysing the data and associated drivers.</li>
|
||||
<li>Forecast inbound workload requirements for the Auto and General sales call center using time series analysis</li>
|
||||
<li>Learn and leverage the Aspect Workforce Management System to ensure efficiency of forecast generation</li>
|
||||
<li>Learn and leverage the capabilities of SAS Enterprise Guide to improve accuracy</li>
|
||||
<li>Liaise with people across the business to ensure meaningful, accurate analysis is provided to senior stakeholders</li>
|
||||
<li>Analyse monthly, weekly and intraday requirements and ensure forecast is accurately predicting workload for breaks, meetings and Leave</li>
|
||||
</ul>
|
||||
<h2>Senior HR Performance Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>June 2016 - January 2017</em></h4>
|
||||
<ul>
|
||||
<li>Harmonise various systems to develop a unified workforce reporting and analysis framework with appropriate metrics</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h2>Workforce Business Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>July 2015 – June 2016</em></h4>
|
||||
<ul>
|
||||
<li>Develop and refine current workforce analysis techniques and databases</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Act as liaison between shared service providers and executives and facilitate communication during the implementation of a payroll leave audit</li>
|
||||
<li>Gather reporting requirements from various business areas and produce ad-hoc and regular reports as required</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h1>EDUCATION</h1>
|
||||
<ul>
|
||||
<li>2011 Bachelor of Business Management, University of Queensland</li>
|
||||
<li>2008 Bachelor of Arts, University of Queensland</li>
|
||||
</ul>
|
||||
<h1>REFERENCES</h1>
|
||||
<ul>
|
||||
<li>Anthony Stiller, Lead Developer, Data Warehousing, Queensland Health</li>
|
||||
</ul>
|
||||
<p><em>0428 038 031</em></p>
|
||||
<ul>
|
||||
<li>Jaime Brian, Head of Cloud Ninjas, TechConnect</li>
|
||||
</ul>
|
||||
<p><em>0422 012 17</em></p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>Metabase and DuckDB</title><link href="http://localhost:8000/metabase-duckdb.html" rel="alternate"></link><published>2023-11-15T20:00:00+10:00</published><updated>2023-11-15T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-11-15:/metabase-duckdb.html</id><summary type="html"><p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p></summary><content type="html"><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing A LOT about it and its <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that, once I grasped it, gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to the presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded.</p>
|
||||
<p>However, for this to work we need some form of containerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has an admin UI accessible over a web page, all whilst keeping the data locked to our network?</p>
|
||||
<h3>The Beginnings of an Idea</h3>
|
||||
<p>Ok so... Big first question. Can DuckDB and Metabase talk? Well... not quite. But first let's take a quick look at the architecture we'll be employing here </p>
|
||||
<p><img alt="Duckdb Architecture" height="auto" width="100%" src="http://localhost:8000/images/metabase_duckdb.png"></p>
|
||||
<p>But you'll notice this pretty glossed-over line, "Connector". That right there is the clincher. So what is this "Connector"? </p>
|
||||
<p>To deep dive into this would take a whole blog, so to give you something to quickly wrap your head around: it's the glue that lets Metabase query your data source. The reality is it's a JDBC driver compiled against Metabase. </p>
|
||||
<p>Thankfully Metabase points you to a <a href="https://github.com/AlexR2D2/metabase_duckdb_driver">community driver</a> for linking to DuckDB (hopefully it will be brought into Metabase proper sooner rather than later). </p>
|
||||
<p>Now, the released driver is still compiled against DuckDB 0.8 while 0.9 is the latest stable, but hopefully the <a href="https://github.com/AlexR2D2/metabase_duckdb_driver/pull/19">PR</a> for this will land very soon, giving a quick way to link to the latest and greatest in DuckDB from Metabase.</p>
|
||||
<h3>But How do we get Data?</h3>
|
||||
<p>Brilliant. Using the recommended Dockerfile we can load up a Metabase container with the duckdb driver pre-built:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">46.2</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">AlexR2D2</span><span class="o">/</span><span class="n">metabase_duckdb_driver</span><span class="o">/</span><span class="n">releases</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="mf">0.1</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;java&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;-jar&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/metabase.jar&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>Great. Now the big question: how do we get the data into the damn thing? Interestingly, when I was initially designing this I had the thought of leveraging the in-memory capabilities of duckdb and pulling in from the parquet on s3 directly as needed; after all, the cluster is on AWS so the s3 API requests should be unbelievably fast anyway, so why bother with a persistent database? </p>
|
||||
<p>Now that we have the default AWS credentials chain loaded, it is trivial to query parquet straight from s3:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_parquet</span><span class="p">(</span><span class="s1">&#39;s3://&lt;bucket&gt;/&lt;file&gt;&#39;</span><span class="p">);</span>
|
||||
</code></pre></div>
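<p>As a rough illustration of that pattern from Python (this is just a sketch; it assumes the duckdb <code>httpfs</code> and <code>aws</code> extensions are installable and that a default AWS credentials chain is already configured, and the bucket and file names are placeholders):</p>

<div class="highlight"><pre><span></span><code>import duckdb

# an in-memory connection is fine for ad hoc queries
conn = duckdb.connect()

# httpfs gives duckdb s3:// support, aws gives the credentials helper
conn.execute(&quot;INSTALL httpfs&quot;)
conn.execute(&quot;LOAD httpfs&quot;)
conn.execute(&quot;INSTALL aws&quot;)
conn.execute(&quot;LOAD aws&quot;)

# pick up whatever the default AWS credentials chain provides
conn.execute(&quot;CALL load_aws_credentials()&quot;)

print(conn.sql(&quot;SELECT count(*) FROM read_parquet(&#39;s3://&lt;bucket&gt;/&lt;file&gt;&#39;)&quot;).fetchall())
</code></pre></div>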
|
||||
|
||||
<p>However, if you're reading directly off parquet all of a sudden you need to consider the partitioning, and I also found out that, if the parquet is being actively written to at the time of querying, duckdb has a hissy fit about metadata not matching the query. Needless to say duckdb and streaming parquet are not happy bedfellows (<em>and frankly were not designed to be, so this is ok</em>). And the idea of trying to explain all this to the run-of-the-mill reporting analyst, who I hope is a business sort of person rather than a tech one, honestly gave me hives.. so I had to make it easier</p>
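<p>To give a sense of what analysts would have had to care about, querying a hive-partitioned dataset directly means encoding the layout in the query itself, roughly like this sketch (the glob, bucket and the <code>dt</code> partition column are made up for illustration):</p>

<div class="highlight"><pre><span></span><code>import duckdb

# assumes the httpfs/aws setup and credentials shown in the earlier snippet
conn = duckdb.connect()

# hive_partitioning turns path segments like dt=2023-11-01 into queryable
# columns; get the glob or the partition filter wrong and you silently miss data
sql = (
    &quot;SELECT count(*) &quot;
    &quot;FROM read_parquet(&#39;s3://&lt;curated-bucket&gt;/&lt;table&gt;/*/*.parquet&#39;, hive_partitioning=true) &quot;
    &quot;WHERE dt = &#39;2023-11-01&#39;&quot;
)
print(conn.sql(sql).fetchall())
</code></pre></div>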
|
||||
<p>The compromise occurred to me... the curated layer is only built daily for reporting, and using that, I could create a duckdb file on disk that could be loaded into the metabase container itself.</p>
|
||||
<p>With some very simple python as an operation in our orchestrator I had a job that would read directly from our curated parquet and create a duckdb file with it.. without giving away too much, the job primarily consisted of this </p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">def</span> <span class="nf">duckdb_builder</span><span class="p">(</span><span class="n">table</span><span class="p">):</span>
|
||||
<span class="n">conn</span> <span class="o">=</span> <span class="n">duckdb</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s2">&quot;curated_duckdb.duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;CALL load_aws_credentials(&#39;</span><span class="si">{</span><span class="n">aws_profile</span><span class="si">}</span><span class="s2">&#39;)&quot;</span><span class="p">)</span>
|
||||
<span class="c1">#This removes a lot of weirdass ANSI in logs you DO NOT WANT</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="s2">&quot;PRAGMA enable_progress_bar=false&quot;</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Create </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> in duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">sql</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">&quot;CREATE OR REPLACE TABLE </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> AS SELECT * FROM read_parquet(&#39;s3://</span><span class="si">{</span><span class="n">curated_bucket</span><span class="si">}</span><span class="s2">/</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2">/*&#39;)&quot;</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> Created&quot;</span><span class="p">)</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And then an upload to an s3 bucket</p>
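<p>That upload is nothing fancy; a hedged sketch of it with boto3 might look like the following (the bucket and key here are placeholders, not the real ones):</p>

<div class="highlight"><pre><span></span><code>import logging

import boto3

log = logging.getLogger(__name__)


def upload_duckdb(bucket: str, file_path: str = &quot;curated_duckdb.duckdb&quot;) -> None:
    # push the freshly built duckdb file to s3 so the metabase
    # container can pull it down on its next scheduled refresh
    s3 = boto3.client(&quot;s3&quot;)
    s3.upload_file(file_path, bucket, f&quot;duckdb/{file_path}&quot;)
    log.info(f&quot;{file_path} uploaded to {bucket}&quot;)
</code></pre></div>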
|
||||
<p>This of course necessitated a cron job baked into the metabase container itself to actually pull the duckdb in every morning. After some careful analysis of timing (because I'm too lazy to implement message queues) I set up an s3 cp job that could be cronned directly from the container itself. This gives us a self-updating metabase container with a duckdb backend for client-facing reporting right in the interface. AND because the duckdb is baked right into the container... there are NO associated s3 or dpu costs (merely the cost of running a relatively large container)</p>
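<p>The helper script that cron line calls is little more than the download side of the same idea; something along these lines would do it (bucket, key and target path are placeholders for this sketch, the target just needs to match wherever metabase is pointed at the duckdb file):</p>

<div class="highlight"><pre><span></span><code>import boto3

# placeholders for this sketch; the real values live with the deployment
BUCKET = &quot;my-curated-reporting-bucket&quot;
KEY = &quot;duckdb/curated_duckdb.duckdb&quot;
TARGET = &quot;/duckdb_data/curated_duckdb.duckdb&quot;


def main() -> None:
    s3 = boto3.client(&quot;s3&quot;)
    # overwrite the duckdb file the metabase database connection points at
    s3.download_file(BUCKET, KEY, TARGET)


if __name__ == &quot;__main__&quot;:
    main()
</code></pre></div>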
|
||||
<p>The final Dockerfile looks like this</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">47.6</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">mkdir</span><span class="w"> </span><span class="o">-</span><span class="n">p</span><span class="w"> </span><span class="o">/</span><span class="n">duckdb_data</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">entrypoint</span><span class="o">.</span><span class="n">sh</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">helper_scripts</span><span class="o">/</span><span class="n">download_duckdb</span><span class="o">.</span><span class="n">py</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">update</span><span class="w"> </span><span class="o">-</span><span class="n">y</span><span class="w"> </span><span class="o">&amp;&amp;</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">upgrade</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">python3</span><span class="w"> </span><span class="n">python3</span><span class="o">-</span><span class="n">pip</span><span class="w"> </span><span class="n">cron</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">pip3</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">boto3</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">cat</span><span class="p">;</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">&quot;0 */6 * * * python3 /home/helper_scripts/download_duckdb.py&quot;</span><span class="p">;</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;bash&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/entrypoint.sh&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And there we have it... an in-memory, containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated, deployable via CI/CD, and providing data updates daily.</p>
|
||||
<p>Now the embedded part.. which isn't built yet but I'll make sure to update you once we have/if we do because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the <a href="https://www.metabase.com/learn/administration/git-based-workflow">metabase documentation</a>. The unfortunate thing about it is that Metabase <em>have</em> hidden this behind the enterprise license.. but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.</p>
|
||||
<p>Until then....</p></content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html"><p>How Appflow simplified a major extract layer and when I choose Managed Services</p></summary><content type="html"><p>I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called <a href="https://aws.amazon.com/appflow/">Amazon Appflow</a>. This product <em>claimed</em> to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.</p>
|
||||
<p>This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box. </p>
|
||||
<p>Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.</p>
|
||||
<h3>Datalake Extraction Layer</h3>
|
||||
<p>I often find that the flakiest part of any data solution, or at least a data solution that consumes data other applications create, is the extraction layer. If you are going to get a bug it's going to be here, not always, but in my experience the first port of call is... did it load :/ </p>
|
||||
<p>It is why I believe one of the most saturated parts of the enterprise data market is in fact the extraction layer. Every man and his dog (not to mention start-up) seems to be trying to "solve" this problem. The result is often that, as a data architect, you are spoilt for choice. BUT it seems that every different type of connection requires a different extractor, all for varying costs and with varying success. </p>
|
||||
<p>The RDBMS extraction space is largely solved, and there are products like <a href="https://www.qlik.com/us/products/qlik-replicate">Qlik Replicate</a>, or <a href="https://aws.amazon.com/dms/">AWS DMS</a> as well as countless others that can do this at the CDC level, and they work relatively well, albeit at a considerable cost. </p>
|
||||
<p>The API landscape for extraction is particularly saturated. I believe I saw on linkedin a graphic showing no fewer than 50 companies offering extraction from API endpoints. I'm not au fait with all of them but they largely seem to <em>claim</em> to achieve the same goal, with varying levels of depth.</p>
|
||||
<p>This proliferation of API extractors obviously coincides with the proliferation of SAAS products taking over from the bespoke software that enterprises would once have run, hooked up to their existing enterprise DB's and used. This new landscape also shows that rather than an enterprise owning their data, they often need the skills, and increasingly the $$$'s, to access it.</p>
|
||||
<p>This complexity for access is normally coupled with poor documentation, where it's a crapshoot as to whether there is a Swagger UI, let alone useful API documentation (this is getting better though)</p>
|
||||
<h3>So why Managed for Extraction?</h3>
|
||||
<p>As you can see above, when you're extracting data it is so often a crapshoot, and writing something bespoke is so incredibly risky that the idea of it gives me hives. I could write a containerised python function for each of my API extractions, or a small batch loader for RDBMS myself, and have a small cluster of these things extracting from tables and API endpoints, but the thought of managing all of that, especially in a one-man DataOps team, is far too overwhelming.</p>
|
||||
<p>And right there are my criteria for choosing a managed service.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p>Do I want to manage this myself?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is there any benefit to me managing this?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is it more cost effective to have someone else manage it?</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Invariably, the extraction layer, at least when answering the questions above, gives me the irks and I just decide to run with a simple managed service where I can point at the source and target, click go, and watch it go brrrrrrrrrrrrr</p>
|
||||
<p>When you couple ease of use with relative reliability, the value proposition of designing bespoke applications for the extraction task rapidly decreases, at least for me</p>
|
||||
<p>And this is why Extraction, at least in systems I design, is more often than not handled by a managed service, and why AppFlow, with the concept of a managed service for API calls to s3, was a cool tech I had to swing a chance to play with.</p>
|
||||
<h3>AppFlow, The Good, The Bad, The Ugly</h3>
|
||||
<p>Using AppFlow turned out to be a largely simple affair, even in Terraform. Once you have the correct authentication tokens it's more or less a matter of selecting the service you want and then creating a "flow" for each endpoint. The complex part is the "Map_All" function for the endpoint. When triggered it automatically creates a 1-to-1 mapping for all fields in the endpoint into the target file (in my case parquet), BUT this actually fundamentally changes the flow you have created and thus causes terraform to shit the bed. This can be dealt with via a lifecycle rule, but means schema changes in the endpoint could cause issues in the future. </p>
|
||||
<p>All in all, having a Managed Service to manage API endpoint extraction has been great and enabled the expansion of a datalake with no bespoke application code to manage the extraction of information from API endpoints, which has proved to be a massive time and money saver overall</p>
|
||||
<p>I am yet to play with establishing a custom endpoint and it will be interesting to see just how much work this is compared with writing the code for a bespoke application... sounds like a good blog post if I get to do it one day.</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="Amazon"></category><category term="Managed Services"></category></entry><entry><title>Dawn of another blog attempt</title><link href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="alternate"></link><published>2023-05-10T20:00:00+10:00</published><updated>2023-05-10T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</id><summary type="html"><p>Containers and How I take my learnings from home and apply them to work</p></summary><content type="html"><p>So, once again I'm trying this blog thing out. For the first time though I'm not going to make it niche, or cultral, but just whatever I feel like writing about. For a number of years now my day job has been in and around the world of data. Starting out as a "Workforce Analyst" (read downloading csv's of payroll data and making excel report) and over time moving to my current role where I build and design systems for ingesting data from various systems systems to allow analysts and Data Scientists. My hobby however has been... well.. tech. These two things have over time merged into the weirdness that is my professional life and I'd like to take elements of this life and share my learnings.</p>
|
||||
<p>The core reason for this is that I keep reading that it's great to write. The other is I've decided that getting my thoughts into some form of order might be beneficial both to me and perhaps a wider audience. There are so many things I've attempted, succeeded and failed at, that, at the very least, it will be worth getting them into a central repository of knowledge so that I, and maybe others, can share and use as time progresses. I also keep seeing on <a href="https://news.ycombinator.com">Hacker News</a> a lot of references to the guys who've been writing blogs since the early days of the internet and I want to contribute my little piece to what I want the internet to be</p>
|
||||
<p>So strap yourselves in as I take you on my data/self hosting journey, sprinkled with a little dev ops and data engineering to whet your appetite over the next little while. Sometimes I might even throw in some cultural or political commentary just to keep things spicy!</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="containers"></category></entry></feed>
|
@ -1,399 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/andrew-ridgway.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html"><p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p></summary><content type="html"><h4>A quick summary of this post by AI</h4>
|
||||
<p>I'm going to use AI to summarise this post here because it ended up quite long (I've edited it ;) ) </p>
|
||||
<p><strong>Summary:</strong></p>
|
||||
<p>A quick look at some of the things I've used Proxmox for</p>
|
||||
<ul>
|
||||
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
|
||||
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
|
||||
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
|
||||
</ul>
|
||||
<p>As part of the summary it came up with this interesting idea of "follow up" I'm leaving it here as I thought it was an interesting take on what I can write about in the future</p>
|
||||
<p><strong>Follow-up Questions:</strong></p>
|
||||
<ol>
|
||||
<li><strong>Kubernetes Cluster:</strong></li>
|
||||
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
|
||||
<li>
|
||||
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>CI/CD with Gitea:</strong></p>
|
||||
</li>
|
||||
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
|
||||
<li>
|
||||
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Monitoring and Logging:</strong></p>
|
||||
</li>
|
||||
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
|
||||
<li>
|
||||
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Future Plans:</strong></p>
|
||||
</li>
|
||||
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
|
||||
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
|
||||
</ol>
|
||||
<h2>A Picture is worth a thousand words</h2>
|
||||
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
|
||||
<p><em>Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture its changed a bit but you sort of get what's going on</em></p>
|
||||
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
|
||||
<h2>The idea</h2>
|
||||
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
|
||||
<p>I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen core i5's upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS shared via NFS.</p>
|
||||
<p>All together the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster)</p>
|
||||
<h3>The platform for the Idea!</h3>
|
||||
<p>After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted it became very very clear that Proxmox was going to be the solution. It's a debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VM's and "LXC's" or Linux Jails... These have turned out to be really really useful but more on that later.</p>
|
||||
<p>First let's define what on earth Proxmox is</p>
|
||||
<h4>Proxmox</h4>
|
||||
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
|
||||
<ol>
|
||||
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
|
||||
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
|
||||
</ol>
|
||||
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Efficient Resource Allocation:</strong>
|
||||
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Live Migration:</strong>
|
||||
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>High Availability:</strong>
|
||||
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Open-Source and Free:</strong>
|
||||
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
|
||||
<p><strong>Relevant Links:</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE documentation: <a href="https://pve-proxmox-community.org/">https://pve-proxmox-community.org/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
|
||||
<h3>LXC's</h3>
|
||||
<p>To start to understand proxmox we do need to focus in on one important piece, LXC's these are containers but not docker container, below I've had mistral summarise some of the differences.</p>
|
||||
<p><strong>Isolation Level</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>System Call Filtering</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Resource Management</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker enforces strict resource limits on every container by default.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Networking</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>What LXC is Focused On:</p>
|
||||
<p>Given these differences, here's what LXC primarily focuses on:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
|
||||
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
|
||||
<p>The LXC's have also been super easy to set up with the help of tteck's helper scripts <a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>. It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
|
||||
<h3>VM's</h3>
|
||||
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
|
||||
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
|
||||
<p>The big question here is what do I use the VM capability of Proxmox for?</p>
|
||||
<p>I actually try to avoid their use as I don't want the massive use of resources; however, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a kubernetes cluster.. and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that can act as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me)</p>
|
||||
<p>I do look forward to writing a separate post about how the kubernetes cluster has gone. I have used k3s and longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system</p>
|
||||
<p>Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the gitea CI/CD I have set up that leverages kubernetes and LXC's to get deployment pipelines, as well as some of the things I'm using n8n, grafana and matrix for, but I think for right now mistral and I need to sign off and get posting. </p>
|
||||
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with this new homelab. (Including an idea I'm starting to form of having my mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase.. but I need to experiment with that a bit more)</p></content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html"><p>A Summary of what I've done and Where I'd like to go for prospective Employers</p></summary><content type="html"><p>To whom it may concern</p>
|
||||
<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p>
|
||||
<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p>
|
||||
<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p>
|
||||
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
|
||||
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a "set and forget" system that will allow me to have multi-user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p>
|
||||
<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here:</p>
|
||||
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
|
||||
<p><a href="[https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
|
||||
<p>I look forward to hearing from you soon.</p>
|
||||
<p>Sincerely,</p>
|
||||
<hr>
|
||||
<p>Andrew Ridgway</p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>A Resume</title><link href="http://localhost:8000/resume.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/resume.html</id><summary type="html"><p>A Summary of My work Experience</p></summary><content type="html"><h1>OVERVIEW</h1>
|
||||
<p>I am a Senior Data Engineer looking to transition my skills to Data and Solution
|
||||
Architecting as well as project management. I have spent the better part of the
|
||||
last decade refining my abilities in taking business requirements and turning
|
||||
those into actionable data engineering, analytics, and software projects with
|
||||
trackable metrics. I believe in agnosticism when it comes to coding languages
|
||||
and have experimented in my own time with many different languages. In my
|
||||
career I have used Python, .NET, PowerShell, TSQL, VB and SAS (multiple
|
||||
products) in an Enterprise capacity. I also have experience using Google Cloud
|
||||
Platform and AWS tools for ETL and data platform development as well as git
|
||||
for version control and deployment using various IAC tools. I have also
|
||||
conducted data analysis and modelling on business metrics to find relationships
|
||||
between both staff and customer behavior and produced actionable
|
||||
recommendations based on the conclusions. In a private context I have also
|
||||
experimented with C, C# and Kotlin. I am looking to further my career by taking
|
||||
my passion for data engineering and analysis as well as web and software
|
||||
development and applying it in a strategic context.</p>
|
||||
<h1>SKILLS &amp; ABILITIES</h1>
|
||||
<ul>
|
||||
<li>Python (scripting, compiling, notebooks – Sagemaker, Jupyter)</li>
|
||||
<li>git</li>
|
||||
<li>SAS (Base, EG, VA)</li>
|
||||
<li>Various Google Cloud Tools (Data Fusion, Compute Engine, Cloud Functions)</li>
|
||||
<li>Various Amazon Tools (EC2, RDS, Kinesis, Glue, Redshift, Lambda, ECS, ECR, EKS)</li>
|
||||
<li>Streaming Technologies (Kafka, Hive, Spark Streaming)</li>
|
||||
<li>Various DB platforms both on Prem and Serverless (MariaDB/MySql, Postgres/Redshift, SQL Server, RDS/Aurora variants)</li>
|
||||
<li>Various Microsoft Products (PowerBI, TSQL, Excel, VBA)</li>
|
||||
<li>Linux Server Administration (cron, bash, systemD)</li>
|
||||
<li>ETL/ELT Development</li>
|
||||
<li>Basic Data Modelling (Kimball, SCD Type 2)</li>
|
||||
<li>IAC (Cloud Formation, Terraform)</li>
|
||||
<li>Datahub Deployment</li>
|
||||
<li>Dagster Orchestration Deployments</li>
|
||||
<li>DBT Modelling and Design Deployments</li>
|
||||
<li>Containerised and Cloud Driven Data Architecture</li>
|
||||
</ul>
|
||||
<h1>EXPERIENCE</h1>
|
||||
<h2>Cloud Data Architect</h2>
|
||||
<h3><em>Redeye Apps</em></h3>
|
||||
<h4><em>May 2022 - Present</em></h4>
|
||||
<ul>
|
||||
<li>Greenfields Research, Design and Deployment of S3 datalake (Parquet)</li>
|
||||
<li>AWS DMS, S3, Athena, Glue</li>
|
||||
<li>Research Design and Deployment of Catalog (Datahub)</li>
|
||||
<li>Design of Data Governance Process (Datahub driven)</li>
|
||||
<li>Research Design and Deployment of Orchestration and Modelling for Transforms (Dagster/DBT into Mesos)</li>
|
||||
<li>CI/CD design and deployment of modelling and orchestration using Gitlab</li>
|
||||
<li>Research, Design and Deployment of ML Ops Dev pipelines and deployment strategy</li>
|
||||
<li>Design of ETL/Pipelines (DBT)</li>
|
||||
<li>Design of Customer Facing Data Products and deployment methodologies (Fully automated via Kafka/Dagster/DBT)</li>
|
||||
</ul>
|
||||
<h2>Data Engineer,</h2>
|
||||
<h3><em>TechConnect IT Solutions</em></h3>
|
||||
<h4><em>August 2021 – May 2022</em></h4>
|
||||
<ul>
|
||||
<li>Design of Cloud Data Batch ETL solutions using Python (Glue)</li>
|
||||
<li>Design of Cloud Data Streaming ETL solution using Python (Kinesis)</li>
|
||||
<li>Solve complex client business problems using software to join and transform data from DB’s, Web API’s, Application API’s and System logs</li>
|
||||
<li>Build CI/CD pipelines to ensure smooth deployments (Bitbucket, gitlab)</li>
|
||||
<li>Apply Prebuilt ML models to software solutions (Sagemaker)</li>
|
||||
<li>Assist with the architecting of Containerisation solutions (Docker, ECS, ECR)</li>
|
||||
<li>API testing and development (gRPC, Rest)</li>
|
||||
</ul>
|
||||
<h2>Enterprise Data Warehouse Developer</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>August 2019 - August 2021</em></h4>
|
||||
<ul>
|
||||
<li>ETL development of CRM, WFP, Outbound Dialer, Inbound switch in Google Cloud, SAS, TSQL</li>
|
||||
<li>Bringing new data to the business to analyse for new insights</li>
|
||||
<li>Redeveloped Version Control and brought git to the data team</li>
|
||||
<li>Introduced python for API enablement in the Enterprise Data Warehouse</li>
|
||||
<li>Partnering with the business to focus data project on actual need and translating into technical requirements</li>
|
||||
</ul>
|
||||
<h2>Business Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2018 - August 2019</em></h4>
|
||||
<ul>
|
||||
<li>Automate Service Performance Reporting using PowerShell/VBA/SAS</li>
|
||||
<li>Learn and leverage SAS EG and VA to streamline Microsoft Excel Reporting</li>
|
||||
<li>Identify and develop data pipelines to source data from multiple sources easily and collate into a single source to identify relationships and trends</li>
|
||||
<li>Technologies used include VBA, PowerShell, SQL, Web API’s, SAS</li>
|
||||
<li>Where SAS is inappropriate use VBA to automate processes in Microsoft Access and Excel</li>
|
||||
<li>Gather Requirements to build meaningful reporting solutions</li>
|
||||
<li>Provide meaningful analysis on business performance and provide relevant presentations and reports to senior stakeholders.</li>
|
||||
</ul>
|
||||
<h2>Forecasting and Capacity Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2017 – January 2018</em></h4>
|
||||
<ul>
|
||||
<li>Develop the outbound forecasting model for the Auto and General sales call center by analysing the relationship between customer decisions and workload drivers</li>
|
||||
<li>This includes the complete data pipeline for the model from identifying and sourcing data, building the reporting and analysing the data and associated drivers.</li>
|
||||
<li>Forecast inbound workload requirements for the Auto and General sales call center using time series analysis</li>
|
||||
<li>Learn and leverage the Aspect Workforce Management System to ensure efficiency of forecast generation</li>
|
||||
<li>Learn and leverage the capabilities of SAS Enterprise Guide to improve accuracy</li>
|
||||
<li>Liaise with people across the business to ensure meaningful, accurate analysis is provided to senior stakeholders</li>
|
||||
<li>Analyse monthly, weekly and intraday requirements and ensure forecast is accurately predicting workload for breaks, meetings and Leave</li>
|
||||
</ul>
|
||||
<h2>Senior HR Performance Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>June 2016 - January 2017</em></h4>
|
||||
<ul>
|
||||
<li>Harmonise various systems to develop a unified workforce reporting and analysis framework with appropriate metrics</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h2>Workforce Business Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>July 2015 – June 2016</em></h4>
|
||||
<ul>
|
||||
<li>Develop and refine current workforce analysis techniques and databases</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Act as liaison between shared service providers and executives and facilitate communication during the implementation of a payroll leave audit</li>
|
||||
<li>Gather reporting requirements from various business areas and produce ad-hoc and regular reports as required</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h1>EDUCATION</h1>
|
||||
<ul>
|
||||
<li>2011 Bachelor of Business Management, University of Queensland</li>
|
||||
<li>2008 Bachelor of Arts, University of Queensland</li>
|
||||
</ul>
|
||||
<h1>REFERENCES</h1>
|
||||
<ul>
|
||||
<li>Anthony Stiller Lead Developer, Data warehousing, Queensland Health</li>
|
||||
</ul>
|
||||
<p><em>0428 038 031</em></p>
|
||||
<ul>
|
||||
<li>Jaime Brian Head of Cloud Ninjas, TechConnect</li>
|
||||
</ul>
|
||||
<p><em>0422 012 17</em></p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>Metabase and DuckDB</title><link href="http://localhost:8000/metabase-duckdb.html" rel="alternate"></link><published>2023-11-15T20:00:00+10:00</published><updated>2023-11-15T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-11-15:/metabase-duckdb.html</id><summary type="html"><p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p></summary><content type="html"><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p>
|
||||
<p>However, for this to work we need some form of containerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has an admin UI accessible over a web page, all whilst keeping the data locked to our network?</p>
|
||||
<h3>The Beginnings of an Idea</h3>
|
||||
<p>Ok so... Big first question. Can Duckdb and Metabase talk? Well... not quite. But first lets take a quick look at the architecture we'll be employing here </p>
|
||||
<p><img alt="Duckdb Architecture" height="auto" width="100%" src="http://localhost:8000/images/metabase_duckdb.png"></p>
|
||||
<p>But you'll notice this pretty glossed-over line, "Connector". That right there is the clincher. So what is this "Connector"? </p>
|
||||
<p>To deep dive into this would take a whole blog, so to give you something to quickly wrap your head around: it's the glue that lets Metabase query your data source. The reality is it's a JDBC driver compiled against Metabase. </p>
|
||||
<p>Thankfully Metabase points you to a <a href="https://github.com/AlexR2D2/metabase_duckdb_driver">community driver</a> for linking to duckdb (hopefully it will be brought into Metabase proper sooner rather than later). </p>
|
||||
<p>Now, the current release of this driver is still compiled against duckdb 0.8 while 0.9 is the latest stable, but hopefully the <a href="https://github.com/AlexR2D2/metabase_duckdb_driver/pull/19">PR</a> for this will land very soon, giving a quick way to link to the latest and greatest in duckdb from Metabase.</p>
|
||||
<h3>But How do we get Data?</h3>
|
||||
<p>Brilliant. Using the recommended Dockerfile we can load up a Metabase container with the duckdb driver pre-built:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">46.2</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">AlexR2D2</span><span class="o">/</span><span class="n">metabase_duckdb_driver</span><span class="o">/</span><span class="n">releases</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="mf">0.1</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;java&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;-jar&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/metabase.jar&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>Great. Now the big question: how do we get the data into the damn thing? Interestingly, when I was initially designing this I had the thought of leveraging the in-memory capabilities of DuckDB and pulling in the parquet from s3 directly as needed. After all, the cluster is on AWS, so the s3 API requests should be unbelievably fast anyway, so why bother with a persistent database?</p>
|
||||
<p>Now that we have the default credentials chain it is trivial to call parquet from s3</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_parquet</span><span class="p">(</span><span class="s1">&#39;s3://&lt;bucket&gt;/&lt;file&gt;&#39;</span><span class="p">);</span>
|
||||
</code></pre></div>
|
||||
|
||||
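<p>For what it's worth, a minimal sketch of what that in-memory pattern looks like from Python is below (an illustration rather than code from the job itself; the AWS profile, bucket and table names are placeholders):</p>
<div class="highlight"><pre><code># Minimal sketch: query parquet on s3 straight from an ephemeral in-memory DuckDB.
# The AWS profile, bucket and table names below are placeholders.
import duckdb

conn = duckdb.connect()  # no file argument = purely in-memory database
conn.execute("PRAGMA enable_progress_bar=false")
conn.sql("CALL load_aws_credentials('my_profile')")  # same credential call the build job uses

df = conn.sql(
    "SELECT count(*) AS row_count FROM read_parquet('s3://my-curated-bucket/my_table/*')"
).df()
print(df)
</code></pre></div>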
<p>However, if you're reading direct off parquet, all of a sudden you need to consider the partitioning, and I also found out that, if the parquet is being actively written to at the time of querying, DuckDB has a hissy fit about metadata not matching the query. Needless to say DuckDB and streaming parquet are not happy bedfellows (<em>and frankly were not designed to be, so this is ok</em>). And the idea of trying to explain all this to the run-of-the-mill reporting analyst, who I hope is a business sort of person rather than tech, honestly gave me hives.. so I had to make it easier.</p>
|
||||
<p>The compromise occurred to me... the curated layer is only built daily for reporting, and, using that, I could create a DuckDB file on disk that could be loaded into the Metabase container itself.</p>
|
||||
<p>With some very simple Python as an operation in our orchestrator I had a job that would read direct from our curated parquet and create a DuckDB file with it.. without giving away too much, the job primarily consisted of this:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">def</span> <span class="nf">duckdb_builder</span><span class="p">(</span><span class="n">table</span><span class="p">):</span>
|
||||
<span class="n">conn</span> <span class="o">=</span> <span class="n">duckdb</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s2">&quot;curated_duckdb.duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;CALL load_aws_credentials(&#39;</span><span class="si">{</span><span class="n">aws_profile</span><span class="si">}</span><span class="s2">&#39;)&quot;</span><span class="p">)</span>
|
||||
<span class="c1">#This removes a lot of weirdass ANSI in logs you DO NOT WANT</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="s2">&quot;PRAGMA enable_progress_bar=false&quot;</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Create </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> in duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">sql</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">&quot;CREATE OR REPLACE TABLE </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> AS SELECT * FROM read_parquet(&#39;s3://</span><span class="si">{</span><span class="n">curated_bucket</span><span class="si">}</span><span class="s2">/</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2">/*&#39;)&quot;</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> Created&quot;</span><span class="p">)</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And then an upload to an s3 bucket</p>
|
||||
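<p>Roughly speaking, the tail end of such a job (with a placeholder bucket name and table list, and assuming boto3 is available) can look something like this:</p>
<div class="highlight"><pre><code># Rough sketch of the end of the orchestrator job: build each table with the
# duckdb_builder function shown above, then push the finished file to s3.
# The table list and bucket name are placeholders.
import boto3

tables = ["sales", "customers", "orders"]  # placeholder table names

for table in tables:
    duckdb_builder(table)  # defined in the snippet above

s3 = boto3.client("s3")
s3.upload_file(
    "curated_duckdb.duckdb",      # local file created by duckdb_builder
    "reporting-duckdb-bucket",    # placeholder bucket
    "curated_duckdb.duckdb",      # key the container will pull down
)
</code></pre></div>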
<p>This of course necessitated a cron job baked in to the Metabase container itself to actually pull the duckdb file in every morning. After some careful analysis of timings (because I'm too lazy to implement message queues) I set up an s3 cp job that could be cronned direct from the container itself. This gives us a self-updating Metabase container with a DuckDB backend for client-facing reporting right in the interface. AND because the duckdb file is baked right into the container... there are NO associated s3 or dpu costs (merely the cost of running a relatively large container).</p>
|
||||
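<p>The helper that the cron entry runs can be as simple as a boto3 download into the folder Metabase reads from; a sketch of the idea (again with a placeholder bucket name, and not the exact script) looks like this:</p>
<div class="highlight"><pre><code># Sketch of a download_duckdb.py style helper: pull the latest duckdb file
# from s3 into /duckdb_data for Metabase to use. Bucket name is a placeholder.
import boto3

BUCKET = "reporting-duckdb-bucket"  # placeholder
KEY = "curated_duckdb.duckdb"
LOCAL_PATH = "/duckdb_data/curated_duckdb.duckdb"


def main():
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, LOCAL_PATH)
    print(f"downloaded {KEY} to {LOCAL_PATH}")


if __name__ == "__main__":
    main()
</code></pre></div>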
<p>The final Dockerfile looks like this</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">47.6</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">mkdir</span><span class="w"> </span><span class="o">-</span><span class="n">p</span><span class="w"> </span><span class="o">/</span><span class="n">duckdb_data</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">entrypoint</span><span class="o">.</span><span class="n">sh</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">helper_scripts</span><span class="o">/</span><span class="n">download_duckdb</span><span class="o">.</span><span class="n">py</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">update</span><span class="w"> </span><span class="o">-</span><span class="n">y</span><span class="w"> </span><span class="o">&amp;&amp;</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">upgrade</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">python3</span><span class="w"> </span><span class="n">python3</span><span class="o">-</span><span class="n">pip</span><span class="w"> </span><span class="n">cron</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">pip3</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">boto3</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">cat</span><span class="p">;</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">&quot;0 */6 * * * python3 /home/helper_scripts/download_duckdb.py&quot;</span><span class="p">;</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;bash&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/entrypoint.sh&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And there we have it... an in memory containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated and deployable via CI/CD, that provides data updates daily.</p>
|
||||
<p>Now the embedded part.. which isn't built yet, but I'll make sure to update you once we have (if we do), because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the <a href="https://www.metabase.com/learn/administration/git-based-workflow">metabase documentation</a>; the unfortunate thing is Metabase <em>have</em> hidden this behind the enterprise license.. but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.</p>
|
||||
<p>Until then....</p></content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html"><p>How Appflow simplified a major extract layer and when I choose Managed Services</p></summary><content type="html"><p>I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called <a href="https://aws.amazon.com/appflow/">Amazon Appflow</a>. This product <em>claimed</em> to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.</p>
|
||||
<p>This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.</p>
|
||||
<p>Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.</p>
|
||||
<h3>Datalake Extraction Layer</h3>
|
||||
<p>I often find that the flakiest part of any data solution, or at least a data solution that consumes data other applications create, is the extraction layer. If you are going to get a bug it's going to be here; not always, but in my experience the first port of call is... did it load :/</p>
|
||||
<p>It is why I believe one of the most saturated parts of the enterprise data market is in fact the extraction layer. Every man and his dog (not to mention every start-up) seems to be trying to "solve" this problem. The result is often that, as a data architect, you are spoilt for choice. BUT it seems that every different type of connection requires a different extractor, all at varying costs and with varying success.</p>
|
||||
<p>The RDBMS extraction space is largely solved, and there are products like <a href="https://www.qlik.com/us/products/qlik-replicate">Qlik Replicate</a> or <a href="https://aws.amazon.com/dms/">AWS DMS</a>, as well as countless others, that can do this at the CDC level, and they work relatively well, albeit at a considerable cost.</p>
|
||||
<p>The API landscape for extraction is particularly saturated. I believe I saw on LinkedIn a graphic showing no less than 50 companies offering extraction from API endpoints. I'm not across all of them, but they largely seem to <em>claim</em> to achieve the same goal, with varying levels of depth.</p>
|
||||
<p>This proliferation of API extractors obviously coincides with the proliferation of SaaS products taking over from the bespoke software that enterprises would once have run, hooked up to their existing enterprise DBs and used. This new landscape also shows that rather than an enterprise owning their data, they often need the skills, and increasingly the $$$'s, to access it.</p>
|
||||
<p>This complexity of access is normally coupled with poor documentation, where it's a crapshoot as to whether there is a Swagger UI, let alone useful API documentation (this is getting better though).</p>
|
||||
<h3>So why Managed for Extraction?</h3>
|
||||
<p>As you can see above, when you're extracting data it is so often a crapshoot, and writing something bespoke is so incredibly risky, that the idea of it gives me hives. I could write a containerised Python function for each of my API extractions, or a small batch loader for RDBMS myself, and have a small cluster of these things extracting from tables and API endpoints, but the thought of managing all of that, especially in a 1-man DataOps team, is far too overwhelming.</p>
|
||||
<p>And right there are my criteria for choosing a managed service.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p>Do I want to manage this myself?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is there any benefit to me managing this?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is it more cost effective to have someone else manage it?</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Invariably, the extraction layer, at least when answering the questions above, gives me the irks and I just decide to run with a simple managed service where I can point at the source and target click go and watch it go brrrrrrrrrrrrr</p>
|
||||
<p>When you couple ease of use with the relative reliability the value proposition of designing bespoke applications for the extraction task rapidly decreases, at least for me</p>
|
||||
<p>And this is why Extraction, at least in systems I design, is more often than not handled by a managed service, and why AppFlow, with the concept of a managed service for API calls to s3, was a cool tech I had to swing a chance to play with.</p>
|
||||
<h3>AppFlow, The Good, The Bad, The Ugly</h3>
|
||||
<p>Using AppFlow turned out to be a largely simple affair, even in Terraform. Once you have the correct authentication tokens it's more or less select the service you want and then create a "flow" for each endpoint. The complex part is the "Map_All" function for the endpoint. When triggered it automatically creates a 1-to-1 mapping for all fields in the endpoint into the target file (in my case parquet), BUT this actually fundamentally changes the flow you have created and thus causes Terraform to shit the bed. This can be dealt with via a lifecycle rule, but it means schema changes in the endpoint could cause issues in the future.</p>
|
||||
<p>All in all, having a Managed Service to handle API endpoint extraction has been great and has enabled the expansion of a datalake with no bespoke application code for pulling information out of API endpoints, which has proved to be a massive time and money saver overall.</p>
|
||||
<p>I am yet to play with establishing a custom endpoint and it will be interesting to see just how much work this is compared with writing the code for a bespoke application... sounds like a good blog post if I get to do it one day.</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="Amazon"></category><category term="Managed Services"></category></entry><entry><title>Dawn of another blog attempt</title><link href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="alternate"></link><published>2023-05-10T20:00:00+10:00</published><updated>2023-05-10T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</id><summary type="html"><p>Containers and How I take my learnings from home and apply them to work</p></summary><content type="html"><p>So, once again I'm trying this blog thing out. For the first time though I'm not going to make it niche, or cultral, but just whatever I feel like writing about. For a number of years now my day job has been in and around the world of data. Starting out as a "Workforce Analyst" (read downloading csv's of payroll data and making excel report) and over time moving to my current role where I build and design systems for ingesting data from various systems systems to allow analysts and Data Scientists. My hobby however has been... well.. tech. These two things have over time merged into the weirdness that is my professional life and I'd like to take elements of this life and share my learnings.</p>
|
||||
<p>The core reason for this is that I keep reading that it's great to write. The other is I've decided that getting my thoughts into some form of order might be beneficial, both to me and perhaps to a wider audience. There are so many things I've attempted, succeeded and failed at that, at the very least, it will be worth getting them into a central repository of knowledge so that I, and maybe others, can share and use them as time progresses. I also keep seeing on <a href="https://news.ycombinator.com">Hacker News</a> a lot of references to the guys who've been writing blogs since the early days of the internet and I want to contribute my little piece to what I want the internet to be.</p>
|
||||
<p>So strap yourselves in as I take you on my data/self-hosting journey, sprinkled with a little DevOps and data engineering to whet your appetite over the next little while. Sometimes I might even throw in some cultural or political commentary just to keep things spicy!</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="containers"></category></entry></feed>
|
@ -1,2 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<rss version="2.0"><channel><title>Andrew Ridgway's Blog - Andrew Ridgway</title><link>http://localhost:8000/</link><description></description><lastBuildDate>Wed, 24 Jul 2024 20:00:00 +1000</lastBuildDate><item><title>Building a 5 node Proxmox cluster!</title><link>http://localhost:8000/proxmox-cluster-1.html</link><description><p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 24 Jul 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-07-24:/proxmox-cluster-1.html</guid><category>Server Architecture</category><category>proxmox</category><category>kubernetes</category><category>hardware</category></item><item><title>A Cover Letter</title><link>http://localhost:8000/cover-letter.html</link><description><p>A Summary of what I've done and Where I'd like to go for prospective Employers</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/cover-letter.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>A Resume</title><link>http://localhost:8000/resume.html</link><description><p>A Summary of My work Experience</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Fri, 23 Feb 2024 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2024-02-23:/resume.html</guid><category>Resume</category><category>Cover Letter</category><category>Resume</category></item><item><title>Metabase and DuckDB</title><link>http://localhost:8000/metabase-duckdb.html</link><description><p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 15 Nov 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-11-15:/metabase-duckdb.html</guid><category>Business Intelligence</category><category>data engineering</category><category>Metabase</category><category>DuckDB</category><category>embedded</category></item><item><title>Implementing Appflow in a Production Datalake</title><link>http://localhost:8000/appflow-production.html</link><description><p>How Appflow simplified a major extract layer and when I choose Managed Services</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Tue, 23 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-23:/appflow-production.html</guid><category>Data Engineering</category><category>data engineering</category><category>Amazon</category><category>Managed Services</category></item><item><title>Dawn of another blog attempt</title><link>http://localhost:8000/how-i-built-the-damn-thing.html</link><description><p>Containers and How I take my learnings from home and apply them to work</p></description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Andrew Ridgway</dc:creator><pubDate>Wed, 10 May 2023 20:00:00 +1000</pubDate><guid isPermaLink="false">tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</guid><category>Data Engineering</category><category>data engineering</category><category>containers</category></item></channel></rss>
|
@ -1,75 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Business Intelligence</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/business-intelligence.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2023-11-15T20:00:00+10:00</updated><entry><title>Metabase and DuckDB</title><link href="http://localhost:8000/metabase-duckdb.html" rel="alternate"></link><published>2023-11-15T20:00:00+10:00</published><updated>2023-11-15T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-11-15:/metabase-duckdb.html</id><summary type="html"><p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p></summary><content type="html"><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p>
|
||||
<p>However, for this to work we need some form of containerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a>, which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has an admin UI accessible over a web page, all whilst keeping the data locked to our network?</p>
|
||||
<h3>The Beginnings of an Idea</h3>
|
||||
<p>Ok so... Big first question. Can DuckDB and Metabase talk? Well... not quite. But first let's take a quick look at the architecture we'll be employing here.</p>
|
||||
<p><img alt="Duckdb Architecture" height="auto" width="100%" src="http://localhost:8000/images/metabase_duckdb.png"></p>
|
||||
<p>But you'll notice this pretty glossed-over line, "Connector"; that right there is the clincher. So what is this "Connector"?</p>
|
||||
<p>To deep dive into this would take a whole blog, so to give you something to quickly wrap your head around: it's the glue that lets Metabase query your data source. In reality it's a JDBC driver compiled against Metabase.</p>
|
||||
<p>Thankfully Metabase points you to a <a href="https://github.com/AlexR2D2/metabase_duckdb_driver">community driver</a> for linking to DuckDB (hopefully it will be brought into Metabase proper sooner rather than later).</p>
|
||||
<p>Now, the current release of this driver is still compiled against DuckDB 0.8 while 0.9 is the latest stable, but hopefully the <a href="https://github.com/AlexR2D2/metabase_duckdb_driver/pull/19">PR</a> for this will land very soon, giving a quick way to link to the latest and greatest in DuckDB from Metabase.</p>
|
||||
<h3>But How do we get Data?</h3>
|
||||
<p>Brilliant. Using the recommended Dockerfile we can load up a Metabase container with the DuckDB driver pre-built:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">46.2</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">AlexR2D2</span><span class="o">/</span><span class="n">metabase_duckdb_driver</span><span class="o">/</span><span class="n">releases</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="mf">0.1</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;java&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;-jar&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/metabase.jar&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>Great. Now the big question: how do we get the data into the damn thing? Interestingly, when I was initially designing this I had the thought of leveraging the in-memory capabilities of DuckDB and pulling in the parquet from s3 directly as needed. After all, the cluster is on AWS, so the s3 API requests should be unbelievably fast anyway, so why bother with a persistent database?</p>
|
||||
<p>Now that we have the default credentials chain it is trivial to call parquet from s3</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_parquet</span><span class="p">(</span><span class="s1">&#39;s3://&lt;bucket&gt;/&lt;file&gt;&#39;</span><span class="p">);</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>However, if you're reading direct off parquet, all of a sudden you need to consider the partitioning, and I also found out that, if the parquet is being actively written to at the time of querying, DuckDB has a hissy fit about metadata not matching the query. Needless to say DuckDB and streaming parquet are not happy bedfellows (<em>and frankly were not designed to be, so this is ok</em>). And the idea of trying to explain all this to the run-of-the-mill reporting analyst, who I hope is a business sort of person rather than tech, honestly gave me hives.. so I had to make it easier.</p>
|
||||
<p>The compromise occurred to me... the curated layer is only built daily for reporting, and, using that, I could create a DuckDB file on disk that could be loaded into the Metabase container itself.</p>
|
||||
<p>With some very simple Python as an operation in our orchestrator I had a job that would read direct from our curated parquet and create a DuckDB file with it.. without giving away too much, the job primarily consisted of this:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">def</span> <span class="nf">duckdb_builder</span><span class="p">(</span><span class="n">table</span><span class="p">):</span>
|
||||
<span class="n">conn</span> <span class="o">=</span> <span class="n">duckdb</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s2">&quot;curated_duckdb.duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;CALL load_aws_credentials(&#39;</span><span class="si">{</span><span class="n">aws_profile</span><span class="si">}</span><span class="s2">&#39;)&quot;</span><span class="p">)</span>
|
||||
<span class="c1">#This removes a lot of weirdass ANSI in logs you DO NOT WANT</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="s2">&quot;PRAGMA enable_progress_bar=false&quot;</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Create </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> in duckdb&quot;</span><span class="p">)</span>
|
||||
<span class="n">sql</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">&quot;CREATE OR REPLACE TABLE </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> AS SELECT * FROM read_parquet(&#39;s3://</span><span class="si">{</span><span class="n">curated_bucket</span><span class="si">}</span><span class="s2">/</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2">/*&#39;)&quot;</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> Created&quot;</span><span class="p">)</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And then an upload to an s3 bucket</p>
|
||||
<p>This of course necessitated a cron job baked in to the Metabase container itself to actually pull the duckdb file in every morning. After some careful analysis of timings (because I'm too lazy to implement message queues) I set up an s3 cp job that could be cronned direct from the container itself. This gives us a self-updating Metabase container with a DuckDB backend for client-facing reporting right in the interface. AND because the duckdb file is baked right into the container... there are NO associated s3 or dpu costs (merely the cost of running a relatively large container).</p>
|
||||
<p>The final Dockerfile looks like this</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">47.6</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">mkdir</span><span class="w"> </span><span class="o">-</span><span class="n">p</span><span class="w"> </span><span class="o">/</span><span class="n">duckdb_data</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">entrypoint</span><span class="o">.</span><span class="n">sh</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">helper_scripts</span><span class="o">/</span><span class="n">download_duckdb</span><span class="o">.</span><span class="n">py</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">update</span><span class="w"> </span><span class="o">-</span><span class="n">y</span><span class="w"> </span><span class="o">&amp;&amp;</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">upgrade</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">python3</span><span class="w"> </span><span class="n">python3</span><span class="o">-</span><span class="n">pip</span><span class="w"> </span><span class="n">cron</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">pip3</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">boto3</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">cat</span><span class="p">;</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">&quot;0 */6 * * * python3 /home/helper_scripts/download_duckdb.py&quot;</span><span class="p">;</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">&quot;bash&quot;</span><span class="p">,</span><span class="w"> </span><span class="s2">&quot;/home/entrypoint.sh&quot;</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And there we have it... an in memory containerised reporting solution with blazing fast capability to aggregate and build reports based on curated data direct from the business.. fully automated and deployable via CI/CD, that provides data updates daily.</p>
|
||||
<p>Now the embedded part.. which isn't built yet, but I'll make sure to update you once we have (if we do), because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the <a href="https://www.metabase.com/learn/administration/git-based-workflow">metabase documentation</a>; the unfortunate thing is Metabase <em>have</em> hidden this behind the enterprise license.. but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.</p>
|
||||
<p>Until then....</p></content><category term="Business Intelligence"></category><category term="data engineering"></category><category term="Metabase"></category><category term="DuckDB"></category><category term="embedded"></category></entry></feed>
|
@ -1,2 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Data Analytics</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/data-analytics.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2023-07-13T20:00:00+10:00</updated><entry><title>Notebook or BI, What is the most appropiate communication medium</title><link href="http://localhost:8000/notebook-or-bi.html" rel="alternate"></link><published>2023-07-13T20:00:00+10:00</published><updated>2023-07-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-07-13:/notebook-or-bi.html</id><summary type="html"><p>When is a notebook enough or when do we need a dashboard</p></summary><content type="html"><p>I want to preface this post by saying I think "Dashboards" or "BI" as terms are wayyyyyyyyyyyyyyyyy over saturated in the market. There seems to be a belief that any question answerable in data deserves the work associated with a dashboard when in fact a simple one off report, or notebook, would be more than enough.</p></content><category term="Data Analytics"></category><category term="data engineering"></category><category term="Data Analytics"></category></entry></feed>
|
@ -1,34 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Data Engineering</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/data-engineering.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2023-05-23T20:00:00+10:00</updated><entry><title>Implementing Appflow in a Production Datalake</title><link href="http://localhost:8000/appflow-production.html" rel="alternate"></link><published>2023-05-23T20:00:00+10:00</published><updated>2023-05-17T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-23:/appflow-production.html</id><summary type="html"><p>How Appflow simplified a major extract layer and when I choose Managed Services</p></summary><content type="html"><p>I recently attended a meetup where there was a talk by an AWS spokesperson. Now don't get me wrong, I normally take these things with a grain of salt. At this talk there was this tiny tiny little segment about a product that AWS had released called <a href="https://aws.amazon.com/appflow/">Amazon Appflow</a>. This product <em>claimed</em> to be able to automate and make easy the link between different API endpoints, REST or otherwise and send that data to another point, whether that is Redshift, Aurora, a general relational db in RDS or otherwise or s3.</p>
|
||||
<p>This was particularly interesting to me because I had recently finished creating an s3 datalake in AWS for the company I work for. Today, I finally put my first Appflow integration to the Datalake into production and I have to say there are some rough edges to the deployment, but it has been more or less as described on the box.</p>
|
||||
<p>Over the course of the next few paragraphs I'd like to explain the thinking I had as I investigated the product and then ultimately why I chose a managed service for this over implementing something myself in python using Dagster which I have also spun up within our cluster on AWS.</p>
|
||||
<h3>Datalake Extraction Layer</h3>
|
||||
<p>I often find that the flakiest part of any data solution, or at least a data solution that consumes data other applications create, is the extraction layer. If you are going to get a bug it's going to be here; not always, but in my experience the first port of call is... did it load :/</p>
|
||||
<p>It is why I believe one of the most saturated parts of the enterprise data market is in fact the extraction layer. Every man and his dog (not to mention every start-up) seems to be trying to "solve" this problem. The result is often that, as a data architect, you are spoilt for choice. BUT it seems that every different type of connection requires a different extractor, all at varying costs and with varying success.</p>
|
||||
<p>The RDBMS extraction space is largely solved, and there are products like <a href="https://www.qlik.com/us/products/qlik-replicate">Qlik Replicate</a> or <a href="https://aws.amazon.com/dms/">AWS DMS</a>, as well as countless others, that can do this at the CDC level, and they work relatively well, albeit at a considerable cost.</p>
|
||||
<p>The API landscape for extraction is particularly saturated. I believe I saw on LinkedIn a graphic showing no less than 50 companies offering extraction from API endpoints. I'm not across all of them, but they largely seem to <em>claim</em> to achieve the same goal, with varying levels of depth.</p>
|
||||
<p>This proliferation of API extractors obviously coincides with the proliferation of SaaS products taking over from the bespoke software that enterprises would once have run, hooked up to their existing enterprise DBs and used. This new landscape also shows that rather than an enterprise owning their data, they often need the skills, and increasingly the $$$'s, to access it.</p>
|
||||
<p>This complexity of access is normally coupled with poor documentation, where it's a crapshoot as to whether there is a Swagger UI, let alone useful API documentation (this is getting better though).</p>
|
||||
<h3>So why Managed for Extraction?</h3>
|
||||
<p>As you can see above, when you're extracting data it is so often a crapshoot, and writing something bespoke is so incredibly risky, that the idea of it gives me hives. I could write a containerised Python function for each of my API extractions, or a small batch loader for RDBMS myself, and have a small cluster of these things extracting from tables and API endpoints, but the thought of managing all of that, especially in a 1-man DataOps team, is far too overwhelming.</p>
|
||||
<p>And right there are my criteria for choosing a managed service.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p>Do I want to manage this myself?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is there any benefit to me managing this?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Is it more cost effective to have someone else manage it?</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Invariably, the extraction layer, at least when answering the questions above, gives me the irks and I just decide to run with a simple managed service where I can point at the source and target click go and watch it go brrrrrrrrrrrrr</p>
|
||||
<p>When you couple ease of use with the relative reliability the value proposition of designing bespoke applications for the extraction task rapidly decreases, at least for me</p>
|
||||
<p>And this is why Extraction, at least in systems I design, is more often than not handled by a managed service, and why AppFlow, with the concept of a managed service for API calls to s3, was a cool tech I had to swing a chance to play with.</p>
|
||||
<h3>AppFlow, The Good, The Bad, The Ugly</h3>
|
||||
<p>Using AppFlow turned out to be a largely simple affair, even in Terraform. Once you have the correct authentication tokens it's more or less select the service you want and then create a "flow" for each endpoint. The complex part is the "Map_All" function for the endpoint. When triggered it automatically creates a 1-to-1 mapping for all fields in the endpoint into the target file (in my case parquet), BUT this actually fundamentally changes the flow you have created and thus causes Terraform to shit the bed. This can be dealt with via a lifecycle rule, but it means schema changes in the endpoint could cause issues in the future.</p>
|
||||
<p>All in all, having a Managed Service to handle API endpoint extraction has been great and has enabled the expansion of a datalake with no bespoke application code for pulling information out of API endpoints, which has proved to be a massive time and money saver overall.</p>
|
||||
<p>I am yet to play with establishing a custom endpoint and it will be interesting to see just how much work this is compared with writing the code for a bespoke application... sounds like a good blog post if I get to do it one day.</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="Amazon"></category><category term="Managed Services"></category></entry><entry><title>Dawn of another blog attempt</title><link href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="alternate"></link><published>2023-05-10T20:00:00+10:00</published><updated>2023-05-10T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2023-05-10:/how-i-built-the-damn-thing.html</id><summary type="html"><p>Containers and How I take my learnings from home and apply them to work</p></summary><content type="html"><p>So, once again I'm trying this blog thing out. For the first time though I'm not going to make it niche, or cultral, but just whatever I feel like writing about. For a number of years now my day job has been in and around the world of data. Starting out as a "Workforce Analyst" (read downloading csv's of payroll data and making excel report) and over time moving to my current role where I build and design systems for ingesting data from various systems systems to allow analysts and Data Scientists. My hobby however has been... well.. tech. These two things have over time merged into the weirdness that is my professional life and I'd like to take elements of this life and share my learnings.</p>
|
||||
<p>The core reason for this is that I keep reading that it's great to write. The other is I've decided that getting my thoughts into some form of order might be beneficial, both to me and perhaps to a wider audience. There are so many things I've attempted, succeeded and failed at that, at the very least, it will be worth getting them into a central repository of knowledge so that I, and maybe others, can share and use them as time progresses. I also keep seeing on <a href="https://news.ycombinator.com">Hacker News</a> a lot of references to the guys who've been writing blogs since the early days of the internet and I want to contribute my little piece to what I want the internet to be.</p>
|
||||
<p>So strap yourselves in as I take you on my data/self-hosting journey, sprinkled with a little DevOps and data engineering to whet your appetite over the next little while. Sometimes I might even throw in some cultural or political commentary just to keep things spicy!</p></content><category term="Data Engineering"></category><category term="data engineering"></category><category term="containers"></category></entry></feed>
|
@ -1,13 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>A Ridgway Musings - How To</title><link href="http://blog.aridgwayweb.com/" rel="alternate"></link><link href="http://blog.aridgwayweb.com/feeds/how-to.atom.xml" rel="self"></link><id>http://blog.aridgwayweb.com/</id><updated>2021-09-18T10:00:00+10:00</updated><subtitle></subtitle><entry><title>A New Way To Build A Free Blog</title><link href="http://blog.aridgwayweb.com/how-i-built-the-damn-thing.html" rel="alternate"></link><published>2021-09-18T10:00:00+10:00</published><updated>2021-09-18T10:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:blog.aridgwayweb.com,2021-09-18:/how-i-built-the-damn-thing.html</id><summary type="html"><p>How I built this blog or doing stuff on the cheap!</p></summary><content type="html"><p>Recently in conversation someone mentioned that github pages was a way to fire up a blog, set up a repo of a certain name under your user with the usual format (i.e an index.html) and <em>poof</em> you now have a website capable of doing anything you need a website to do... free! </p>
|
||||
<p>Of course this only goes so far. A blog, for example, is simply a bunch of static pages with some JS, CSS and HTML: easy peasy, you're good to go. You want a full LAMP stack and complete server-side control.. well... this solution is probably not for you.</p>
|
||||
<p>What I wanted to write in as my first post though was how I set this little corner of the web up. It was fun, quick, and REALLY easy to do and I thought I'd share how I did it.</p>
|
||||
<h2>What You'll Need</h2>
|
||||
<p>For this particular set up I am standing on the back of several technologies.
|
||||
1. <a href="https://git-scm.com/">Git</a>
|
||||
2. <a href="https://github.com/">Github</a> (no way out of this unfortunately)
|
||||
3. <a href="https://www.python.org/">Python</a> (I'm using 3.8 but I'm sure most 3+ version will work)
|
||||
4. <a href="https://www.gnu.org/software/bash/">Bash</a> (I build this on my linux laptop but WSL or MacOS should work more or less the same)
|
||||
5. <a href="https://github.com/getpelican/pelican">Pelican</a></p>
|
||||
<p>I won't go through how to install these as those links have far more thorough documentation than I could ever provide. So from here on out I will assume you have installed and configured all the prereqs!</p>
|
||||
<p>...To Be Continued</p></content><category term="How To"></category><category term="pelican"></category><category term="publishing"></category><category term="github pages"></category></entry></feed>
|
@ -1,143 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Resume</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/resume.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-03-13T20:00:00+10:00</updated><entry><title>A Cover Letter</title><link href="http://localhost:8000/cover-letter.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/cover-letter.html</id><summary type="html"><p>A Summary of what I've done and Where I'd like to go for prospective Employers</p></summary><content type="html"><p>To whom it may concern</p>
|
||||
<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p>
|
||||
<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP, including ML workloads using SageMaker. </p>
|
||||
<p>In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders at all levels of the business, from Analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the complexities involved in communicating with both technical and less technical employees, particularly in relation to data and ML technologies. </p>
|
||||
<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where this capability would not traditionally be used. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer a daily copy and paste of folders with dates appended for major updates. My solution involved establishing guidelines for the use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure nothing I build is considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p>
|
||||
<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allow me to have multi-user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p>
|
||||
<p>Currently I am working on a project to merge my skills in SQL Modelling and Orchestration with GPT APIs to try and lessen the manual modelling burden. You can see some of this work in its very early stages here:</p>
|
||||
<p><a href="https://github.com/armistace/gpt-sql-generator">gpt-sql-generator</a></p>
|
||||
<p><a href="[https://github.com/armistace/datahub_dbt_sources_generator">dbt_sources_generator</a></p>
|
||||
<p>I look forward to hearing from you soon.</p>
|
||||
<p>Sincerely,</p>
|
||||
<hr>
|
||||
<p>Andrew Ridgway</p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry><entry><title>A Resume</title><link href="http://localhost:8000/resume.html" rel="alternate"></link><published>2024-02-23T20:00:00+10:00</published><updated>2024-03-13T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-02-23:/resume.html</id><summary type="html"><p>A Summary of My work Experience</p></summary><content type="html"><h1>OVERVIEW</h1>
|
||||
<p>I am a Senior Data Engineer looking to transition my skills to Data and Solution
|
||||
Architecting as well as project management. I have spent the better part of the
|
||||
last decade refining my abilities in taking business requirements and turning
|
||||
those into actionable data engineering, analytics, and software projects with
|
||||
trackable metrics. I believe in agnosticism when it comes to coding languages
|
||||
and have experimented in my own time with many different languages. In my
|
||||
career I have used Python, .NET, PowerShell, TSQL, VB and SAS (multiple
|
||||
products) in an Enterprise capacity. I also have experience using Google Cloud
|
||||
Platform and AWS tools for ETL and data platform development as well as git
|
||||
for version control and deployment using various IAC tools. I have also
|
||||
conducted data analysis and modelling on business metrics to find relationships
|
||||
between both staff and customer behavior and produced actionable
|
||||
recommendations based on the conclusions. In a private context I have also
|
||||
experimented with C, C# and Kotlin. I am looking to further my career by taking
|
||||
my passion for data engineering and analysis as well as web and software
|
||||
development and applying it in a strategic context.</p>
|
||||
<h1>SKILLS &amp; ABILITIES</h1>
|
||||
<ul>
|
||||
<li>Python (scripting, compiling, notebooks – Sagemaker, Jupyter)</li>
|
||||
<li>git</li>
|
||||
<li>SAS (Base, EG, VA)</li>
|
||||
<li>Various Google Cloud Tools (Data Fusion, Compute Engine, Cloud Functions)</li>
|
||||
<li>Various Amazon Tools (EC2, RDS, Kinesis, Glue, Redshift, Lambda, ECS, ECR, EKS)</li>
|
||||
<li>Streaming Technologies (Kafka, Hive, Spark Streaming)</li>
|
||||
<li>Various DB platforms both on Prem and Serverless (MariaDB/MySql, Postgres/Redshift, SQL Server, RDS/Aurora variants)</li>
|
||||
<li>Various Microsoft Products (PowerBI, TSQL, Excel, VBA)</li>
|
||||
<li>Linux Server Administration (cron, bash, systemD)</li>
|
||||
<li>ETL/ELT Development</li>
|
||||
<li>Basic Data Modelling (Kimball, SCD Type 2)</li>
|
||||
<li>IAC (Cloud Formation, Terraform)</li>
|
||||
<li>Datahub Deployment</li>
|
||||
<li>Dagster Orchestration Deployments</li>
|
||||
<li>DBT Modelling and Design Deployments</li>
|
||||
<li>Containerised and Cloud Driven Data Architecture</li>
|
||||
</ul>
|
||||
<h1>EXPERIENCE</h1>
|
||||
<h2>Cloud Data Architect</h2>
|
||||
<h3><em>Redeye Apps</em></h3>
|
||||
<h4><em>May 2022 - Present</em></h4>
|
||||
<ul>
|
||||
<li>Greenfields Research, Design and Deployment of S3 datalake (Parquet)</li>
|
||||
<li>AWS DMS, S3, Athena, Glue</li>
|
||||
<li>Research Design and Deployment of Catalog (Datahub)</li>
|
||||
<li>Design of Data Governance Process (Datahub driven)</li>
|
||||
<li>Research Design and Deployment of Orchestration and Modelling for Transforms (Dagster/DBT into Mesos)</li>
|
||||
<li>CI/CD design and deployment of modelling and orchestration using Gitlab</li>
|
||||
<li>Research, Design and Deployment of ML Ops Dev pipelines and deployment strategy</li>
|
||||
<li>Design of ETL/Pipelines (DBT)</li>
|
||||
<li>Design of Customer Facing Data Products and deployment methodologies (Fully automated via Kafka/Dagster/DBT)</li>
|
||||
</ul>
|
||||
<h2>Data Engineer</h2>
|
||||
<h3><em>TechConnect IT Solutions</em></h3>
|
||||
<h4><em>August 2021 – May 2022</em></h4>
|
||||
<ul>
|
||||
<li>Design of Cloud Data Batch ETL solutions using Python (Glue)</li>
|
||||
<li>Design of Cloud Data Streaming ETL solution using Python (Kinesis)</li>
|
||||
<li>Solve complex client business problems using software to join and transform data from DB’s, Web API’s, Application API’s and System logs</li>
|
||||
<li>Build CI/CD pipelines to ensure smooth deployments (Bitbucket, gitlab)</li>
|
||||
<li>Apply Prebuilt ML models to software solutions (Sagemaker)</li>
|
||||
<li>Assist with the architecting of Containerisation solutions (Docker, ECS, ECR)</li>
|
||||
<li>API testing and development (gRPC, Rest)</li>
|
||||
</ul>
|
||||
<h2>Enterprise Data Warehouse Developer</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>August 2019 - August 2021</em></h4>
|
||||
<ul>
|
||||
<li>ETL development of CRM, WFP, Outbound Dialer, Inbound switch in Google Cloud, SAS, TSQL</li>
|
||||
<li>Bringing new data to the business to analyse for new insights</li>
|
||||
<li>Redeveloped Version Control and brought git to the data team</li>
|
||||
<li>Introduced python for API enablement in the Enterprise Data Warehouse</li>
|
||||
<li>Partnering with the business to focus data project on actual need and translating into technical requirements</li>
|
||||
</ul>
|
||||
<h2>Business Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2018 - August 2019</em></h4>
|
||||
<ul>
|
||||
<li>Automate Service Performance Reporting using PowerShell/VBA/SAS</li>
|
||||
<li>Learn and leverage SAS EG and VA to streamline Microsoft Excel Reporting</li>
|
||||
<li>Identify and develop data pipelines to source data from multiple sources easily and collate into a single source to identify relationships and trends</li>
|
||||
<li>Technologies used include VBA, PowerShell, SQL, Web API’s, SAS</li>
|
||||
<li>Where SAS is inappropriate use VBA to automate processes in Microsoft Access and Excel</li>
|
||||
<li>Gather Requirements to build meaningful reporting solutions</li>
|
||||
<li>Provide meaningful analysis on business performance and provide relevant presentations and reports to senior stakeholders.</li>
|
||||
</ul>
|
||||
<h2>Forecasting and Capacity Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2017 – January 2018</em></h4>
|
||||
<ul>
|
||||
<li>Develop the outbound forecasting model for the Auto and General sales call center by analysing the relationship between customer decisions and workload drivers</li>
|
||||
<li>This includes the complete data pipeline for the model from identifying and sourcing data, building the reporting and analysing the data and associated drivers.</li>
|
||||
<li>Forecast inbound workload requirements for the Auto and General sales call center using time series analysis</li>
|
||||
<li>Learn and leverage the Aspect Workforce Management System to ensure efficiency of forecast generation</li>
|
||||
<li>Learn and leverage the capabilities of SAS Enterprise Guide to improve accuracy</li>
|
||||
<li>Liaise with people across the business to ensure meaningful, accurate analysis is provided to senior stakeholders</li>
|
||||
<li>Analyse monthly, weekly and intraday requirements and ensure forecast is accurately predicting workload for breaks, meetings and Leave</li>
|
||||
</ul>
|
||||
<h2>Senior HR Performance Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>June 2016 - January 2017</em></h4>
|
||||
<ul>
|
||||
<li>Harmonise various systems to develop a unified workforce reporting and analysis framework with appropriate metrics</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h2>Workforce Business Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>July 2015 – June 2016</em></h4>
|
||||
<ul>
|
||||
<li>Develop and refine current workforce analysis techniques and databases</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Act as liaison between shared service providers and executives and facilitate communication during the implementation of a payroll leave audit</li>
|
||||
<li>Gather reporting requirements from various business areas and produce ad-hoc and regular reports as required</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h1>EDUCATION</h1>
|
||||
<ul>
|
||||
<li>2011 Bachelor of Business Management, University of Queensland</li>
|
||||
<li>2008 Bachelor of Arts, University of Queensland</li>
|
||||
</ul>
|
||||
<h1>REFERENCES</h1>
|
||||
<ul>
|
||||
<li>Anthony Stiller Lead Developer, Data warehousing, Queensland Health</li>
|
||||
</ul>
|
||||
<p><em>0428 038 031</em></p>
|
||||
<ul>
|
||||
<li>Jaime Brian Head of Cloud Ninjas, TechConnect</li>
|
||||
</ul>
|
||||
<p><em>0422 012 17</em></p></content><category term="Resume"></category><category term="Cover Letter"></category><category term="Resume"></category></entry></feed>
|
@ -1,153 +0,0 @@
|
||||
<?xml version="1.0" encoding="utf-8"?>
|
||||
<feed xmlns="http://www.w3.org/2005/Atom"><title>Andrew Ridgway's Blog - Server Architecture</title><link href="http://localhost:8000/" rel="alternate"></link><link href="http://localhost:8000/feeds/server-architecture.atom.xml" rel="self"></link><id>http://localhost:8000/</id><updated>2024-07-24T20:00:00+10:00</updated><entry><title>Building a 5 node Proxmox cluster!</title><link href="http://localhost:8000/proxmox-cluster-1.html" rel="alternate"></link><published>2024-07-24T20:00:00+10:00</published><updated>2024-07-24T20:00:00+10:00</updated><author><name>Andrew Ridgway</name></author><id>tag:localhost,2024-07-24:/proxmox-cluster-1.html</id><summary type="html"><p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p></summary><content type="html"><h4>A quick summary of this post by AI</h4>
|
||||
<p>I'm going to use AI to summarise this post here because it ended up quite long. I've edited it ;) </p>
|
||||
<p><strong>Summary:</strong></p>
|
||||
<p>Quick look at some of the things I've used Proxmox for</p>
|
||||
<ul>
|
||||
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
|
||||
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
|
||||
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
|
||||
</ul>
|
||||
<p>As part of the summary it came up with this interesting idea of "follow up" questions. I'm leaving them here as I thought it was an interesting take on what I can write about in the future</p>
|
||||
<p><strong>Follow-up Questions:</strong></p>
|
||||
<ol>
|
||||
<li><strong>Kubernetes Cluster:</strong></li>
|
||||
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
|
||||
<li>
|
||||
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>CI/CD with Gitea:</strong></p>
|
||||
</li>
|
||||
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
|
||||
<li>
|
||||
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Monitoring and Logging:</strong></p>
|
||||
</li>
|
||||
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
|
||||
<li>
|
||||
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Future Plans:</strong></p>
|
||||
</li>
|
||||
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
|
||||
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
|
||||
</ol>
|
||||
<h2>A Picture is worth a thousand words</h2>
|
||||
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
|
||||
<p><em>Yes I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on</em></p>
|
||||
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
|
||||
<h2>The idea</h2>
|
||||
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are now potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
|
||||
<p>I eventually decided I'd use some hardware I had already lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB DDR3 RAM). I'd also use an old Dell Workstation I had lying around to provide space for some storage; it currently has 4TB of RAID 1 on BTRFS shared via NFS.</p>
|
||||
<p>Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster)</p>
|
||||
<h3>The platform for the Idea!</h3>
|
||||
<p>After doing some amazing reddit research and looking at various homelab ideas for doing what I wanted, it became very, very clear that Proxmox was going to be the solution. It's a Debian-based, open source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise grade hypervisor ready to spin up VMs and "LXCs", or Linux jails... These have turned out to be really, really useful but more on that later.</p>
|
||||
<p>First let's define what on earth Proxmox is</p>
|
||||
<h4>Proxmox</h4>
|
||||
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
|
||||
<ol>
|
||||
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
|
||||
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
|
||||
</ol>
|
||||
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Efficient Resource Allocation:</strong>
|
||||
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Live Migration:</strong>
|
||||
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>High Availability:</strong>
|
||||
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Open-Source and Free:</strong>
|
||||
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
|
||||
<p><strong>Relevant Links:</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE documentation: <a href="https://pve-proxmox-community.org/">https://pve-proxmox-community.org/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
|
||||
<h3>LXC's</h3>
|
||||
<p>To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.</p>
|
||||
<p><strong>Isolation Level</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>System Call Filtering</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Resource Management</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker enforces strict resource limits on every container by default.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Networking</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>What LXC is Focused On:</p>
|
||||
<p>Given these differences, here's what LXC primarily focuses on:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
|
||||
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect: isolating each instance and keeping resources in check, and by passing devices through I can get a graphics card in there for some sweet, sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
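<p>For the curious, here is a rough sketch of what creating one of these LXCs from the Proxmox host can look like, wrapped in a little Python around the <code>pct</code> CLI. The VMID, template name and resource sizes are made-up examples for illustration, not the actual values on my cluster.</p>

<pre><code># Illustrative sketch only: create and start a Plex-style LXC by shelling out
# to Proxmox's pct CLI. The VMID, template and sizes below are placeholders.
import subprocess

def create_lxc(vmid: int, hostname: str) -> None:
    subprocess.run([
        "pct", "create", str(vmid),
        "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",  # assumed template name
        "--hostname", hostname,
        "--cores", "2",
        "--memory", "2048",                         # MB of RAM
        "--rootfs", "local-lvm:8",                  # 8 GB root disk on local-lvm
        "--net0", "name=eth0,bridge=vmbr0,ip=dhcp",
        "--unprivileged", "1",
    ], check=True)                                  # raise if pct exits non-zero
    subprocess.run(["pct", "start", str(vmid)], check=True)

create_lxc(101, "plex")
</code></pre>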
|
||||
<p>The LXCs have also been super easy to set up with the help of tteck's helper scripts: <a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>. It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
|
||||
<h3>VM's</h3>
|
||||
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
|
||||
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
|
||||
<p>The big question here is what do I use the VM capability of Proxmox for?</p>
|
||||
<p>I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster, and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that acts as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me)</p>
|
||||
<p>I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system</p>
|
||||
<p>Anyways, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs to get deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and Mistral need to sign off and get posting. </p>
|
||||
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my Mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase.. but I need to experiment with that a bit more)</p></content><category term="Server Architecture"></category><category term="proxmox"></category><category term="kubernetes"></category><category term="hardware"></category></entry></feed>
|
@ -1,153 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/data-engineering.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="data engineering" />
|
||||
<meta name="tags" contents="containers" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/how-i-built-the-damn-thing.html">
|
||||
<meta property="og:title" content="Dawn of another blog attempt">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2023-05-10 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Dawn of another blog attempt</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 10 May 2023
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<p>So, once again I'm trying this blog thing out. For the first time though I'm not going to make it niche, or cultural, but just whatever I feel like writing about. For a number of years now my day job has been in and around the world of data. Starting out as a "Workforce Analyst" (read: downloading CSVs of payroll data and making Excel reports) and over time moving to my current role where I build and design systems for ingesting data from various systems so that analysts and Data Scientists can use the data. My hobby however has been... well.. tech. These two things have over time merged into the weirdness that is my professional life and I'd like to take elements of this life and share my learnings.</p>
|
||||
<p>The core reason for this is that I keep reading that it's great to write. The other is I've decided that getting my thoughts into some form of order might be beneficial both to me and perhaps a wider audience. There are so many things I've attempted, succeeded and failed at that, at the very least, it will be worth getting them into a central repository of knowledge so that I, and maybe others, can share and use as time progresses. I also keep seeing on <a href="https://news.ycombinator.com">Hacker News</a> a lot of references to the guys who've been writing blogs since the early days of the internet and I want to contribute my little piece to what I want the internet to be</p>
|
||||
<p>So strap yourselves in as I take you on my data/self-hosting journey, sprinkled with a little DevOps and data engineering to whet your appetite over the next little while. Sometimes I might even throw in some cultural or political commentary just to keep things spicy!</p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
Before Width: | Height: | Size: 423 KiB |
Before Width: | Height: | Size: 1.3 MiB |
Before Width: | Height: | Size: 97 KiB |
Before Width: | Height: | Size: 146 KiB |
Before Width: | Height: | Size: 2.4 MiB |
@ -1,212 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<!-- Set your background image for this header on the line below. -->
|
||||
<header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="page-heading">
|
||||
<h1>Andrew Ridgway's Blog</h1>
|
||||
<hr class="small">
|
||||
<span class="subheading"></span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/proxmox-cluster-1.html" rel="bookmark" title="Permalink to Building a 5 node Proxmox cluster!">
|
||||
<h2 class="post-title">
|
||||
Building a 5 node Proxmox cluster!
|
||||
</h2>
|
||||
</a>
|
||||
<p>Upgrade from a small docker-compose style server to full proxmox server with kubernetes, LXC, and a hypervisor</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 24 July 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/cover-letter.html" rel="bookmark" title="Permalink to A Cover Letter">
|
||||
<h2 class="post-title">
|
||||
A Cover Letter
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/resume.html" rel="bookmark" title="Permalink to A Resume">
|
||||
<h2 class="post-title">
|
||||
A Resume
|
||||
</h2>
|
||||
</a>
|
||||
<p>A Summary of My work Experience</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/metabase-duckdb.html" rel="bookmark" title="Permalink to Metabase and DuckDB">
|
||||
<h2 class="post-title">
|
||||
Metabase and DuckDB
|
||||
</h2>
|
||||
</a>
|
||||
<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 15 November 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/appflow-production.html" rel="bookmark" title="Permalink to Implementing Appflow in a Production Datalake">
|
||||
<h2 class="post-title">
|
||||
Implementing Appflow in a Production Datalake
|
||||
</h2>
|
||||
</a>
|
||||
<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Tue 23 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
<div class="post-preview">
|
||||
<a href="http://localhost:8000/how-i-built-the-damn-thing.html" rel="bookmark" title="Permalink to Dawn of another blog attempt">
|
||||
<h2 class="post-title">
|
||||
Dawn of another blog attempt
|
||||
</h2>
|
||||
</a>
|
||||
<p>Containers and How I take my learnings from home and apply them to work</p>
|
||||
<p class="post-meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 10 May 2023
|
||||
</p>
|
||||
</div>
|
||||
<hr>
|
||||
|
||||
<!-- Pager -->
|
||||
<ul class="pager">
|
||||
<li class="next">
|
||||
</li>
|
||||
</ul>
|
||||
Page 1 / 1
|
||||
<hr>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,226 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/business-intelligence.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="data engineering" />
|
||||
<meta name="tags" contents="Metabase" />
|
||||
<meta name="tags" contents="DuckDB" />
|
||||
<meta name="tags" contents="embedded" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/metabase-duckdb.html">
|
||||
<meta property="og:title" content="Metabase and DuckDB">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2023-11-15 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Metabase and DuckDB</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 15 November 2023
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p>
|
||||
<p>However, for this to work we need some form of containerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a>, which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has an admin UI accessible over a web page, all whilst keeping the data locked to our network?</p>
|
||||
<h3>The Beginnings of an Idea</h3>
|
||||
<p>Ok so... Big first question. Can DuckDB and Metabase talk? Well... not quite. But first let's take a quick look at the architecture we'll be employing here </p>
|
||||
<p><img alt="Duckdb Architecture" height="auto" width="100%" src="http://localhost:8000/images/metabase_duckdb.png"></p>
|
||||
<p>But you'll notice this pretty glossed-over line, "Connector"; that right there is the clincher. So what is this "Connector"? </p>
|
||||
<p>To deep dive into this would take a whole blog, so to give you something to quickly wrap your head around: it's the glue that lets Metabase query your data source. The reality is it's a JDBC driver compiled against Metabase. </p>
|
||||
<p>Thankfully Metabase point you to a <a href="https://github.com/AlexR2D2/metabase_duckdb_driver">community driver</a> for linking to duckdb ( hopefully it will be brought into metabase proper sooner rather than later ) </p>
|
||||
<p>Now the release of this driver is still compiled against 0.8 of DuckDB while 0.9 is the latest stable, but hopefully the <a href="https://github.com/AlexR2D2/metabase_duckdb_driver/pull/19">PR</a> for this will land very soon, giving a good quick way to link to the latest and greatest in DuckDB from Metabase</p>
|
||||
<h3>But How do we get Data?</h3>
|
||||
<p>Brilliant. Using the recommended Dockerfile we can load up a Metabase container with the DuckDB driver pre-built</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">46.2</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">github</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">AlexR2D2</span><span class="o">/</span><span class="n">metabase_duckdb_driver</span><span class="o">/</span><span class="n">releases</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="mf">0.1</span><span class="o">.</span><span class="mi">6</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">"java"</span><span class="p">,</span><span class="w"> </span><span class="s2">"-jar"</span><span class="p">,</span><span class="w"> </span><span class="s2">"/home/metabase.jar"</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>Great. Now the big question: how do we get the data into the damn thing? Interestingly, when I was initially designing this I had the thought of leveraging the in-memory capabilities of DuckDB and pulling in from the parquet on s3 directly as needed. After all, the cluster is on AWS so the s3 API requests should be unbelievably fast anyway, so why bother with a persistent database? </p>
|
||||
<p>Now that we have the default credentials chain it is trivial to call parquet from s3</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">SELECT</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="k">FROM</span><span class="w"> </span><span class="n">read_parquet</span><span class="p">(</span><span class="s1">'s3://<bucket>/<file>'</span><span class="p">);</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>However, if you're reading direct off parquet all of a sudden you need to consider the partitioning, and I also found out that, if the parquet is being actively written to at the time of querying, DuckDB has a hissyfit about metadata not matching the query. Needless to say DuckDB and streaming parquet are not happy bedfellows (<em>and frankly were not designed to be, so this is ok</em>). And the idea of trying to explain all this to the run-of-the-mill reporting analyst, who I hope is a business sort of person rather than tech, honestly gave me hives.. so I had to make it easier</p>
|
||||
<p>The compromise occurred to me... the curated layer is only built daily for reporting, and using that, I could create a DuckDB file on disk that could be loaded into the Metabase container itself.</p>
|
||||
<p>With some very simple Python as an operation in our orchestrator, I had a job that would read directly from our curated parquet and create a DuckDB file from it. Without giving away too much, the job primarily consisted of this:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="k">def</span> <span class="nf">duckdb_builder</span><span class="p">(</span><span class="n">table</span><span class="p">):</span>
|
||||
<span class="n">conn</span> <span class="o">=</span> <span class="n">duckdb</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="s2">"curated_duckdb.duckdb"</span><span class="p">)</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="sa">f</span><span class="s2">"CALL load_aws_credentials('</span><span class="si">{</span><span class="n">aws_profile</span><span class="si">}</span><span class="s2">')"</span><span class="p">)</span>
|
||||
<span class="c1">#This removes a lot of weirdass ANSI in logs you DO NOT WANT</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">execute</span><span class="p">(</span><span class="s2">"PRAGMA enable_progress_bar=false"</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Create </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> in duckdb"</span><span class="p">)</span>
|
||||
<span class="n">sql</span> <span class="o">=</span> <span class="sa">f</span><span class="s2">"CREATE OR REPLACE TABLE </span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> AS SELECT * FROM read_parquet('s3://</span><span class="si">{</span><span class="n">curated_bucket</span><span class="si">}</span><span class="s2">/</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2">/*')"</span>
|
||||
<span class="n">conn</span><span class="o">.</span><span class="n">sql</span><span class="p">(</span><span class="n">sql</span><span class="p">)</span>
|
||||
<span class="n">log</span><span class="o">.</span><span class="n">info</span><span class="p">(</span><span class="sa">f</span><span class="s2">"</span><span class="si">{</span><span class="n">table</span><span class="si">}</span><span class="s2"> Created"</span><span class="p">)</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And then an upload to an S3 bucket.</p>
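<p>The upload itself only needs a couple of lines of boto3. A rough sketch (the bucket and key here are placeholders, not the real ones):</p>

<div class="highlight"><pre><code>import boto3

def upload_duckdb(file_path="curated_duckdb.duckdb",
                  bucket="reporting-artifacts",         # placeholder bucket
                  key="duckdb/curated_duckdb.duckdb"):  # placeholder key
    """Push the freshly built DuckDB file to S3 for the Metabase container to collect."""
    s3 = boto3.client("s3")
    s3.upload_file(file_path, bucket, key)

upload_duckdb()
</code></pre></div>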
|
||||
<p>This of course necessitated a cron job baked into the Metabase container itself to actually pull the DuckDB file in every morning. After some careful analysis of timing (because I'm too lazy to implement message queues) I set up an s3 cp job that could be cronned directly from the container itself. This gives us a self-updating Metabase container with a DuckDB backend for client-facing reporting right in the interface. AND, because the DuckDB file is baked right into the container... there are NO associated S3 or DPU costs (merely the cost of running a relatively large container).</p>
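<p>The helper script the Dockerfile copies in (<code>helper_scripts/download_duckdb.py</code>) just needs to do the reverse. A minimal sketch, again with placeholder bucket and key names:</p>

<div class="highlight"><pre><code>import boto3

# Placeholder locations; the real bucket and key live in the pipeline config
BUCKET = "reporting-artifacts"
KEY = "duckdb/curated_duckdb.duckdb"
LOCAL_PATH = "/duckdb_data/curated_duckdb.duckdb"

def main():
    """Pull the latest curated DuckDB file into the container's data directory."""
    s3 = boto3.client("s3")
    s3.download_file(BUCKET, KEY, LOCAL_PATH)

if __name__ == "__main__":
    main()
</code></pre></div>

<p>Cron then just needs to run this on a schedule, which is what the crontab line in the Dockerfile below takes care of.</p>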
|
||||
<p>The final Dockerfile looks like this:</p>
|
||||
<div class="highlight"><pre><span></span><code><span class="n">FROM</span><span class="w"> </span><span class="n">openjdk</span><span class="p">:</span><span class="mi">19</span><span class="o">-</span><span class="n">buster</span>
|
||||
|
||||
<span class="n">ENV</span><span class="w"> </span><span class="n">MB_PLUGINS_DIR</span><span class="o">=/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="n">downloads</span><span class="o">.</span><span class="n">metabase</span><span class="o">.</span><span class="n">com</span><span class="o">/</span><span class="n">v0</span><span class="o">.</span><span class="mf">47.6</span><span class="o">/</span><span class="n">metabase</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
<span class="n">ADD</span><span class="w"> </span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">chmod</span><span class="w"> </span><span class="mi">744</span><span class="w"> </span><span class="o">/</span><span class="n">home</span><span class="o">/</span><span class="n">plugins</span><span class="o">/</span><span class="n">duckdb</span><span class="o">.</span><span class="n">metabase</span><span class="o">-</span><span class="n">driver</span><span class="o">.</span><span class="n">jar</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">mkdir</span><span class="w"> </span><span class="o">-</span><span class="n">p</span><span class="w"> </span><span class="o">/</span><span class="n">duckdb_data</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">entrypoint</span><span class="o">.</span><span class="n">sh</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">COPY</span><span class="w"> </span><span class="n">helper_scripts</span><span class="o">/</span><span class="n">download_duckdb</span><span class="o">.</span><span class="n">py</span><span class="w"> </span><span class="o">/</span><span class="n">home</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">update</span><span class="w"> </span><span class="o">-</span><span class="n">y</span><span class="w"> </span><span class="o">&&</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">upgrade</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">apt</span><span class="o">-</span><span class="n">get</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">python3</span><span class="w"> </span><span class="n">python3</span><span class="o">-</span><span class="n">pip</span><span class="w"> </span><span class="n">cron</span><span class="w"> </span><span class="o">-</span><span class="n">y</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">pip3</span><span class="w"> </span><span class="n">install</span><span class="w"> </span><span class="n">boto3</span>
|
||||
|
||||
<span class="n">RUN</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span><span class="n">l</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">cat</span><span class="p">;</span><span class="w"> </span><span class="n">echo</span><span class="w"> </span><span class="s2">"0 */6 * * * python3 /home/helper_scripts/download_duckdb.py"</span><span class="p">;</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">crontab</span><span class="w"> </span><span class="o">-</span>
|
||||
|
||||
<span class="n">CMD</span><span class="w"> </span><span class="p">[</span><span class="s2">"bash"</span><span class="p">,</span><span class="w"> </span><span class="s2">"/home/entrypoint.sh"</span><span class="p">]</span>
|
||||
</code></pre></div>
|
||||
|
||||
<p>And there we have it... an in-memory, containerised reporting solution with blazing-fast capability to aggregate and build reports based on curated data direct from the business... fully automated, deployable via CI/CD, and providing data updates daily.</p>
|
||||
<p>Now the embedded part... which isn't built yet, but I'll make sure to update you once we have it (if we do), because the architecture is very exciting for an embedded reporting workflow that is deployable via CI/CD processes to applications. As a little taster I'll point you to the <a href="https://www.metabase.com/learn/administration/git-based-workflow">Metabase documentation</a>; the unfortunate thing is that Metabase <em>have</em> hidden this behind the enterprise license... but I can absolutely see why. If we get to implementing this I'll be sure to update you here on the learnings.</p>
|
||||
<p>Until then....</p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,171 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/data-analytics.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="data engineering" />
|
||||
<meta name="tags" contents="Data Analytics" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/notebook-or-bi.html">
|
||||
<meta property="og:title" content="Notebook or BI, What is the most appropiate communication medium">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2023-07-13 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Notebook or BI, What is the most appropriate communication medium</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Thu 13 July 2023
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<p>I want to preface this post by saying I think "Dashboards" or "BI" as terms are wayyyyyyyyyyyyyyyyy oversaturated in the market. There seems to be a belief that any question answerable with data deserves the work associated with a dashboard when in fact a simple one-off report, or notebook, would be more than enough.</p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<p>
|
||||
<script type="text/javascript" src="https://sessionize.com/api/speaker/sessions/83c5d14a-bd19-46b4-8335-0ac8358ac46d/0x0x91929ax">
|
||||
</script>
|
||||
</p>
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://twitter.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-twitter fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://facebook.com/ar17787">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-facebook fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
<li>
|
||||
<a href="https://github.com/armistace">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,303 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/server-architecture.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="proxmox" />
|
||||
<meta name="tags" contents="kubernetes" />
|
||||
<meta name="tags" contents="hardware" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/proxmox-cluster-1.html">
|
||||
<meta property="og:title" content="Building a 5 node Proxmox cluster!">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2024-07-24 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>Building a 5 node Proxmox cluster!</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Wed 24 July 2024
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<h4>A quick summary of this post by AI</h4>
|
||||
<p>I'm going to use AI to summarise this post here because it ended up quite long (I've edited it) ;) </p>
|
||||
<p><strong>Summary:</strong></p>
|
||||
<p>Quick look at some of the things I've used Proxmox for</p>
|
||||
<ul>
|
||||
<li>I've set up LXC containers for various services like Plex, databases (PostgreSQL, MySQL, MongoDB), Nginx, and file serving, taking advantage of Proxmox's ease of use and integration with standard Linux tools.</li>
|
||||
<li>I'm using QEMU-based virtual machines (VMs) sparingly due to resource concerns, but have set up a simple Kubernetes cluster across three nodes (Intel NUCs) using VMs. Additionally, I have a development VM for remote coding environments.</li>
|
||||
<li>My current plans include writing about my Kubernetes setup, Gitea CI/CD pipelines, and other tools like n8n, Grafana, and Matrix.</li>
|
||||
</ul>
|
||||
<p>As part of the summary it came up with this interesting idea of "follow-up" questions. I'm leaving them here as I thought it was an interesting take on what I can write about in the future.</p>
|
||||
<p><strong>Follow-up Questions:</strong></p>
|
||||
<ol>
|
||||
<li><strong>Kubernetes Cluster:</strong></li>
|
||||
<li>What challenges did you face while setting up your Kubernetes cluster with k3s and Longhorn? How did you troubleshoot and eventually stabilize the system?</li>
|
||||
<li>
|
||||
<p>How have you configured resource allocation for your Kubernetes nodes to balance performance and efficiency?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>CI/CD with Gitea:</strong></p>
|
||||
</li>
|
||||
<li>Can you provide more details on how you're integrating LXC containers with your Gitea CI/CD pipelines? What steps are involved in setting up this process?</li>
|
||||
<li>
|
||||
<p>What triggers deployments or builds in your CI/CD setup, and how do you handle failures or errors?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Monitoring and Logging:</strong></p>
|
||||
</li>
|
||||
<li>How have you configured monitoring and logging for your Proxmox setup? Are you using tools like Prometheus, Grafana, or others to keep track of your systems' health?</li>
|
||||
<li>
|
||||
<p>How do you ensure the security and privacy of your data while utilizing these tools?</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Future Plans:</strong></p>
|
||||
</li>
|
||||
<li>You mentioned exploring the idea of having Mistral AI write blog posts based on your notes. Can you elaborate more on this concept? What challenges might arise, and how do you plan to address them?</li>
|
||||
<li>Are there any other new technologies or projects you're considering for your homelab in the near future?</li>
|
||||
</ol>
|
||||
<h2>A Picture is worth a thousand words</h2>
|
||||
<p><img alt="Proxmox Image" height="auto" width="100%" src="http://localhost:8000/images/proxmox.jpg"></p>
|
||||
<p><em>Yes, I know the setup is a bit hacky but it works. Below is an image of the original architecture; it's changed a bit but you sort of get what's going on.</em></p>
|
||||
<p><img alt="Proxmox Architecture" height="auto" width="100%" src="http://localhost:8000/images/Server_Initial_Architecture.png"></p>
|
||||
<h2>The idea</h2>
|
||||
<p>For some time now I have been toying with the idea of a hypervisor. Initially my thoughts were to get some old blade servers and use those. That was until someone pointed out their power requirements. Looking at specs for some of these machines, the power supplies would be 600 to 800 watts, which is fine until you realise that these have redundant power supplies and are potentially pulling up to 1.5kW of energy... I'm not made of money!</p>
|
||||
<p>I eventually decided I'd use some hardware I already had lying around, including the old server, as well as 3 old Intel NUCs I could pick up for under $100 (4th gen Core i5s upgraded to 16GB DDR3 RAM). I'd also use an old Dell workstation I had lying around to provide space for some storage; it currently has 4TB in RAID 1 on BTRFS, shared via NFS.</p>
|
||||
<p>Altogether the 5 machines draw less than 600W of power. Cool, hardware sorted (at least for a little hobby cluster).</p>
|
||||
<h3>The platform for the Idea!</h3>
|
||||
<p>After doing some amazing Reddit research and looking at various homelab ideas for doing what I wanted, it became very, very clear that Proxmox was going to be the solution. It's a Debian-based, open-source hypervisor that, for the cost of an annoying little nag when you log in and some manual deb repo config, gives you an enterprise-grade hypervisor ready to spin up VMs and "LXCs", or Linux jails... These have turned out to be really, really useful, but more on that later.</p>
|
||||
<p>First, let's define what on earth Proxmox is.</p>
|
||||
<h4>Proxmox</h4>
|
||||
<p>Proxmox VE (Virtual Environment) is an open-source server virtualization platform that has gained significant popularity among home lab enthusiasts due to its robustness, ease of use, and impressive feature set. Here's why Proxmox stands out as a fantastic choice for homelab clusters:</p>
|
||||
<ol>
|
||||
<li><strong>Simultaneous Management of LXC Containers and VMs:</strong>
|
||||
Proxmox VE allows you to manage both Linux Container (LXC) guests and Virtual Machines (VMs) under a single, intuitive web interface or via the command line. This makes it incredibly convenient to run diverse workloads on your homelab cluster.</li>
|
||||
</ol>
|
||||
<p>For instance, you might use LXC containers for lightweight tasks like web servers, mail servers, or development environments due to their low overhead and fast start-up times. Meanwhile, VMs are perfect for heavier workloads that require more resources or require full system isolation, such as database servers or Windows-based applications.</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Efficient Resource Allocation:</strong>
|
||||
Proxmox VE provides fine-grained control over resource allocation, allowing you to specify resource limits (CPU, memory, disk I/O) for both LXC containers and VMs on a per-guest basis. This ensures that your resources are used efficiently, even when running mixed workloads.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Live Migration:</strong>
|
||||
One of the standout features of Proxmox VE is its support for live migration of both LXC containers and VMs between nodes in your cluster. This enables you to balance workloads dynamically, perform maintenance tasks without downtime, and make the most out of your hardware resources.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>High Availability:</strong>
|
||||
The built-in high availability feature allows you to set up automatic failover for your critical services running as LXC containers or VMs. In case of a node failure, Proxmox VE will automatically migrate the guests to another node in the cluster, ensuring minimal downtime.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Open-Source and Free:</strong>
|
||||
Being open-source and free (with optional paid support), Proxmox VE is an attractive choice for budget-conscious home lab enthusiasts who want to explore server virtualization without breaking the bank. It also offers a large community of users and developers, ensuring continuous improvement and innovation.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>Proxmox VE is an incredibly useful platform for homelab clusters due to its ability to manage both LXC containers and VMs efficiently, along with its advanced features like live migration and high availability. Whether you're looking to run diverse workloads or experiment with virtualization technologies, Proxmox VE is definitely worth considering.</p>
|
||||
<p><strong>Relevant Links:</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>Official Proxmox VE website: <a href="https://www.proxmox.com/">https://www.proxmox.com/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE documentation: <a href="https://pve-proxmox-community.org/">https://pve-proxmox-community.org/</a></p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Proxmox VE forums: <a href="https://forum.proxmox.com/">https://forum.proxmox.com/</a></p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>I'd like to thank the mistral-nemo LLM for writing that ;) </p>
|
||||
<h3>LXC's</h3>
|
||||
<p>To start to understand Proxmox we do need to focus in on one important piece: LXCs. These are containers, but not Docker containers; below I've had Mistral summarise some of the differences.</p>
|
||||
<p><strong>Isolation Level</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC uses Linux's built-in features like cgroups and namespaces for containerization. This provides a high degree of isolation between containers.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker also uses these features but it adds an additional layer called the "Docker Engine" which manages many aspects of the containers, including networking, storage, etc.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>System Call Filtering</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC does not have system call filtering by default. This means that processes inside LXC containers can make any syscall available on the host.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides system call filtering with its "rootless" mode or using a tool like AppArmor, which restricts the capabilities of processes running in containers.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Resource Management</strong></p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>LXC has built-in support for cgroup hierarchy management and does not enforce strict limits by default.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker enforces strict resource limits on every container by default.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p><strong>Networking</strong>:</p>
|
||||
<ul>
|
||||
<li>
|
||||
<p>In LXC, each container gets its own network namespace but IP addresses are shared by default. Networking is managed using traditional Linux tools like <code>ip</code> or <code>bridge-utils</code>.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p>Docker provides a custom networking model with features like user-defined networks, service discovery, and automatic swarm mode integration.</p>
|
||||
</li>
|
||||
</ul>
|
||||
<p>What LXC is Focused On:</p>
|
||||
<p>Given these differences, here's what LXC primarily focuses on:</p>
|
||||
<ol>
|
||||
<li>
|
||||
<p><strong>Simplicity and Lightweightness</strong>: LXC aims to provide a lightweight containerization solution by utilizing only Linux's built-in features with minimal overhead. This makes it appealing for systems where resource usage needs to be kept at a minimum.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Control and Flexibility</strong>: By not adding an extra layer like Docker Engine, LXC gives users more direct control over their containers. This can make it easier to manage complex setups or integrate with other tools.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Integration with Traditional Linux Tools</strong>: Since LXC uses standard Linux tools for networking (like <code>ip</code> and <code>bridge-utils</code>) and does not add its own layer, it integrates well with traditional Linux systems administration practices.</p>
|
||||
</li>
|
||||
<li>
|
||||
<p><strong>Use Cases Where Fine-grained Control is Required</strong>: Because of its flexible nature, LXC can be useful in scenarios where fine-grained control over containerization is required. For example, in scientific computing clusters or high-performance computing environments where every bit of performance matters.</p>
|
||||
</li>
|
||||
</ol>
|
||||
<p>So, while Docker provides a more polished and feature-rich container ecosystem, LXC offers a simple, lightweight, and flexible alternative for those who prefer to have more direct control over their containers and prefer using standard Linux tools.</p>
|
||||
<p>Ever since I discovered Proxmox LXC containers, my server management has been a breeze. For my Plex setup, it's perfect - isolating each instance and keeping resources in check - and by passing a device through I can get a graphics card in there for some sweet, sweet hardware decoding. Same goes for my databases; PostgreSQL, MySQL, and MongoDB all run smoothly as individual LXCs. Nginx, too, has found its home here, handling reverse proxy duties without breaking a sweat. And for file serving, what could be better than having a dedicated LXC for that? It's like having my own little server farm right at my fingertips!</p>
|
||||
<p>The LXCs have also been super easy to set up with the help of tteck's helper scripts: <a href="https://community-scripts.github.io/Proxmox/">Proxmox Helper Scripts</a>. It was very sad to hear he had gotten <a href="https://www.reddit.com/r/Proxmox/comments/1gk19gm/ttecks_proxmoxve_helper_scripts_changes/">sick</a> and I really hope he gets well soon!</p>
|
||||
<h3>VM's</h3>
|
||||
<p>Proxmox uses the open-source QEMU hypervisor for hardware virtualization, enabling it to create and manage multiple isolated virtual machines on a single physical host. QEMU, which stands for Quick Emulator, is a full system emulator that can run different operating systems directly on a host machine's hardware. When used in conjunction with Proxmox's built-in web-based interface and clustering capabilities, QEMU provides numerous advantages for VM management. These include live migration of running VMs between nodes without downtime, efficient resource allocation due to QEMU's lightweight nature, support for both KVM (Kernel-based Virtual Machine) full virtualization and hardware-assisted virtualization technologies like Intel VT-x or AMD-V, and the ability to manage and monitor VMs through Proxmox's intuitive web interface. Additionally, QEMU's open-source nature allows Proxmox users to leverage a large community of developers for ongoing improvements and troubleshooting!</p>
|
||||
<p>Again I'd like to thank mistral-nemo for that very informative piece of prose ;) </p>
|
||||
<p>The big question here is: what do I use the VM capability of Proxmox for?</p>
|
||||
<p>I actually try to avoid their use as I don't want the massive use of resources. However, part of the hardware design I came up with was to use the 3 old Intel NUCs as predominantly a Kubernetes cluster... and so I have 3 VMs spread across those nodes that act as my very simple Kubernetes cluster. I also have a VM I turn on and off as required that acts as a development machine and gives me remote VS Code or Zed environments. (I look forward to writing a blog post on Zed and how that's gone for me.)</p>
|
||||
<p>I do look forward to writing a separate post about how the Kubernetes cluster has gone. I have used k3s and Longhorn and it hasn't been a rosy picture, but after a couple of months I finally seem to have landed on a stable system.</p>
|
||||
<p>Anyway, hopefully this gives a pretty quick overview of my new cluster and some of the technologies it uses. I hope to write a post in the future about the Gitea CI/CD I have set up that leverages Kubernetes and LXCs for deployment pipelines, as well as some of the things I'm using n8n, Grafana and Matrix for, but I think for right now myself and Mistral need to sign off and get posting.</p>
|
||||
<p>Thanks for reading this surprisingly long post (if you got here) and I look forward to updating you on some of the other cool things I'm experimenting with on this new homelab. (Including an idea I'm starting to form of having my Mistral instance actually start to write some blogs on this site using notes I write so that my posting can increase... but I need to experiment with that a bit more.)</p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,280 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
<meta http-equiv="X-UA-Compatible" content="IE=edge">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1">
|
||||
<meta name="description" content="">
|
||||
<meta name="author" content="">
|
||||
|
||||
<title>Andrew Ridgway's Blog</title>
|
||||
|
||||
<link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />
|
||||
<link href="http://localhost:8000/feeds/resume.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Categories Atom Feed" />
|
||||
|
||||
<!-- Bootstrap Core CSS -->
|
||||
<link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom CSS -->
|
||||
<link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">
|
||||
|
||||
<!-- Code highlight color scheme -->
|
||||
<link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">
|
||||
|
||||
<!-- Custom Fonts -->
|
||||
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
|
||||
<link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
|
||||
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
|
||||
|
||||
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
|
||||
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
|
||||
<!--[if lt IE 9]>
|
||||
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
|
||||
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
|
||||
<![endif]-->
|
||||
|
||||
|
||||
|
||||
|
||||
<meta name="tags" contents="Cover Letter" />
|
||||
<meta name="tags" contents="Resume" />
|
||||
|
||||
|
||||
<meta property="og:locale" content="en">
|
||||
<meta property="og:site_name" content="Andrew Ridgway's Blog">
|
||||
|
||||
<meta property="og:type" content="article">
|
||||
<meta property="article:author" content="">
|
||||
<meta property="og:url" content="http://localhost:8000/resume.html">
|
||||
<meta property="og:title" content="A Resume">
|
||||
<meta property="og:description" content="">
|
||||
<meta property="og:image" content="http://localhost:8000/">
|
||||
<meta property="article:published_time" content="2024-02-23 20:00:00+10:00">
|
||||
</head>
|
||||
|
||||
<body>
|
||||
|
||||
<!-- Navigation -->
|
||||
<nav class="navbar navbar-default navbar-custom navbar-fixed-top">
|
||||
<div class="container-fluid">
|
||||
<!-- Brand and toggle get grouped for better mobile display -->
|
||||
<div class="navbar-header page-scroll">
|
||||
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
|
||||
<span class="sr-only">Toggle navigation</span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
<span class="icon-bar"></span>
|
||||
</button>
|
||||
<a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
|
||||
</div>
|
||||
|
||||
<!-- Collect the nav links, forms, and other content for toggling -->
|
||||
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
|
||||
<ul class="nav navbar-nav navbar-right">
|
||||
|
||||
</ul>
|
||||
</div>
|
||||
<!-- /.navbar-collapse -->
|
||||
</div>
|
||||
<!-- /.container -->
|
||||
</nav>
|
||||
|
||||
<!-- Page Header -->
|
||||
<header class="intro-header" style="background-image: url('http://localhost:8000/theme/images/post-bg.jpg')">
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<div class="post-heading">
|
||||
<h1>A Resume</h1>
|
||||
<span class="meta">Posted by
|
||||
<a href="http://localhost:8000/author/andrew-ridgway.html">Andrew Ridgway</a>
|
||||
on Fri 23 February 2024
|
||||
</span>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</header>
|
||||
|
||||
<!-- Main Content -->
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<!-- Post Content -->
|
||||
<article>
|
||||
<h1>OVERVIEW</h1>
|
||||
<p>I am a Senior Data Engineer looking to transition my skills to Data and Solution
|
||||
Architecting as well as project management. I have spent the better part of the
|
||||
last decade refining my abilities in taking business requirements and turning
|
||||
those into actionable data engineering, analytics, and software projects with
|
||||
trackable metrics. I believe in agnosticism when it comes to coding languages
|
||||
and have experimented in my own time with many different languages. In my
|
||||
career I have used Python, .NET, PowerShell, TSQL, VB and SAS (multiple
|
||||
products) in an Enterprise capacity. I also have experience using Google Cloud
|
||||
Platform and AWS tools for ETL and data platform development as well as git
|
||||
for version control and deployment using various IAC tools. I have also
|
||||
conducted data analysis and modelling on business metrics to find relationships
|
||||
between both staff and customer behavior and produced actionable
|
||||
recommendations based on the conclusions. In a private context I have also
|
||||
experimented with C, C# and Kotlin. I am looking to further my career by taking
|
||||
my passion for data engineering and analysis as well as web and software
|
||||
development and applying it in a strategic context.</p>
|
||||
<h1>SKILLS & ABILITIES</h1>
|
||||
<ul>
|
||||
<li>Python (scripting, compiling, notebooks – Sagemaker, Jupyter)</li>
|
||||
<li>git</li>
|
||||
<li>SAS (Base, EG, VA)</li>
|
||||
<li>Various Google Cloud Tools (Data Fusion, Compute Engine, Cloud Functions)</li>
|
||||
<li>Various Amazon Tools (EC2, RDS, Kinesis, Glue, Redshift, Lambda, ECS, ECR, EKS)</li>
|
||||
<li>Streaming Technologies (Kafka, Hive, Spark Streaming)</li>
|
||||
<li>Various DB platforms both on Prem and Serverless (MariaDB/MySQL, Postgres/Redshift, SQL Server, RDS/Aurora variants)</li>
|
||||
<li>Various Microsoft Products (PowerBI, TSQL, Excel, VBA)</li>
|
||||
<li>Linux Server Administration (cron, bash, systemD)</li>
|
||||
<li>ETL/ELT Development</li>
|
||||
<li>Basic Data Modelling (Kimball, SCD Type 2)</li>
|
||||
<li>IAC (Cloud Formation, Terraform)</li>
|
||||
<li>Datahub Deployment</li>
|
||||
<li>Dagster Orchestration Deployments</li>
|
||||
<li>DBT Modelling and Design Deployments</li>
|
||||
<li>Containerised and Cloud Driven Data Architecture</li>
|
||||
</ul>
|
||||
<h1>EXPERIENCE</h1>
|
||||
<h2>Cloud Data Architect</h2>
|
||||
<h3><em>Redeye Apps</em></h3>
|
||||
<h4><em>May 2022 - Present</em></h4>
|
||||
<ul>
|
||||
<li>Greenfields Research, Design and Deployment of S3 datalake (Parquet)</li>
|
||||
<li>AWS DMS, S3, Athena, Glue</li>
|
||||
<li>Research Design and Deployment of Catalog (Datahub)</li>
|
||||
<li>Design of Data Governance Process (Datahub driven)</li>
|
||||
<li>Research Design and Deployment of Orchestration and Modelling for Transforms (Dagster/DBT into Mesos)</li>
|
||||
<li>CI/CD design and deployment of modelling and orchestration using Gitlab</li>
|
||||
<li>Research, Design and Deployment of ML Ops Dev pipelines and deployment strategy</li>
|
||||
<li>Design of ETL/Pipelines (DBT)</li>
|
||||
<li>Design of Customer Facing Data Products and deployment methodologies (Fully automated via Kafka/Dagster/DBT)</li>
|
||||
</ul>
|
||||
<h2>Data Engineer</h2>
|
||||
<h3><em>TechConnect IT Solutions</em></h3>
|
||||
<h4><em>August 2021 – May 2022</em></h4>
|
||||
<ul>
|
||||
<li>Design of Cloud Data Batch ETL solutions using Python (Glue)</li>
|
||||
<li>Design of Cloud Data Streaming ETL solution using Python (Kinesis)</li>
|
||||
<li>Solve complex client business problems using software to join and transform data from DB’s, Web API’s, Application API’s and System logs</li>
|
||||
<li>Build CI/CD pipelines to ensure smooth deployments (Bitbucket, gitlab)</li>
|
||||
<li>Apply Prebuilt ML models to software solutions (Sagemaker)</li>
|
||||
<li>Assist with the architecting of Containerisation solutions (Docker, ECS, ECR)</li>
|
||||
<li>API testing and development (gRPC, Rest)</li>
|
||||
</ul>
|
||||
<h2>Enterprise Data Warehouse Developer</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>August 2019 - August 2021</em></h4>
|
||||
<ul>
|
||||
<li>ETL development of CRM, WFP, Outbound Dialer, Inbound switch in Google Cloud, SAS, TSQL</li>
|
||||
<li>Bringing new data to the business to analyse for new insights</li>
|
||||
<li>Redeveloped Version Control and brought git to the data team</li>
|
||||
<li>Introduced python for API enablement in the Enterprise Data Warehouse</li>
|
||||
<li>Partnering with the business to focus data project on actual need and translating into technical requirements</li>
|
||||
</ul>
|
||||
<h2>Business Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2018 - August 2019</em></h4>
|
||||
<ul>
|
||||
<li>Automate Service Performance Reporting using PowerShell/VBA/SAS</li>
|
||||
<li>Learn and leverage SAS EG and VA to streamline Microsoft Excel Reporting</li>
|
||||
<li>Identify and develop data pipelines to source data from multiple sources easily and collate into a single source to identify relationships and trends</li>
|
||||
<li>Technologies used include VBA, PowerShell, SQL, Web API’s, SAS</li>
|
||||
<li>Where SAS is inappropriate use VBA to automate processes in Microsoft Access and Excel</li>
|
||||
<li>Gather Requirements to build meaningful reporting solutions</li>
|
||||
<li>Provide meaningful analysis on business performance and provide relevant presentations and reports to senior stakeholders.</li>
|
||||
</ul>
|
||||
<h2>Forecasting and Capacity Analyst</h2>
|
||||
<h3><em>Auto and General Insurance</em></h3>
|
||||
<h4><em>January 2017 – January 2018</em></h4>
|
||||
<ul>
|
||||
<li>Develop the outbound forecasting model for the Auto and General sales call center by analysing the relationship between customer decisions and workload drivers</li>
|
||||
<li>This includes the complete data pipeline for the model from identifying and sourcing data, building the reporting and analysing the data and associated drivers.</li>
|
||||
<li>Forecast inbound workload requirements for the Auto and General sales call center using time series analysis</li>
|
||||
<li>Learn and leverage the Aspect Workforce Management System to ensure efficiency of forecast generation</li>
|
||||
<li>Learn and leverage the capabilities of SAS Enterprise Guide to improve accuracy</li>
|
||||
<li>Liaise with people across the business to ensure meaningful, accurate analysis is provided to senior stakeholders</li>
|
||||
<li>Analyse monthly, weekly and intraday requirements and ensure forecast is accurately predicting workload for breaks, meetings and Leave</li>
|
||||
</ul>
|
||||
<h2>Senior HR Performance Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>June 2016 - January 2017</em></h4>
|
||||
<ul>
|
||||
<li>Harmonise various systems to develop a unified workforce reporting and analysis framework with appropriate metrics</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h2>Workforce Business Analyst</h2>
|
||||
<h3><em>Queensland Department of Justice and Attorney General</em></h3>
|
||||
<h4><em>July 2015 – June 2016</em></h4>
|
||||
<ul>
|
||||
<li>Develop and refine current workforce analysis techniques and databases</li>
|
||||
<li>Use VBA to automate regular reporting in Microsoft Access and Excel</li>
|
||||
<li>Act as liaison between shared service providers and executives and facilitate communication during the implementation of a payroll leave audit</li>
|
||||
<li>Gather reporting requirements from various business areas and produce ad-hoc and regular reports as required</li>
|
||||
<li>Participate in government process through the production of briefs including Questions on Notice and Estimates Briefs for departmental executives</li>
|
||||
</ul>
|
||||
<h1>EDUCATION</h1>
|
||||
<ul>
|
||||
<li>2011 Bachelor of Business Management, University of Queensland</li>
|
||||
<li>2008 Bachelor of Arts, University of Queensland</li>
|
||||
</ul>
|
||||
<h1>REFERENCES</h1>
|
||||
<ul>
|
||||
<li>Anthony Stiller, Lead Developer, Data Warehousing, Queensland Health</li>
|
||||
</ul>
|
||||
<p><em>0428 038 031</em></p>
|
||||
<ul>
|
||||
<li>Jaime Brian, Head of Cloud Ninjas, TechConnect</li>
|
||||
</ul>
|
||||
<p><em>0422 012 17</em></p>
|
||||
</article>
|
||||
|
||||
<hr>
|
||||
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<hr>
|
||||
|
||||
<!-- Footer -->
|
||||
<footer>
|
||||
<div class="container">
|
||||
<div class="row">
|
||||
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
|
||||
<ul class="list-inline text-center">
|
||||
<li>
|
||||
<a href="https://git.aridgwayweb.com/explore/repos">
|
||||
<span class="fa-stack fa-lg">
|
||||
<i class="fa fa-circle fa-stack-2x"></i>
|
||||
<i class="fa fa-github fa-stack-1x fa-inverse"></i>
|
||||
</span>
|
||||
</a>
|
||||
</li>
|
||||
</ul>
|
||||
<p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
|
||||
which takes great advantage of <a href="http://python.org">Python</a>.</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</footer>
|
||||
|
||||
<!-- jQuery -->
|
||||
<script src="http://localhost:8000/theme/js/jquery.js"></script>
|
||||
|
||||
<!-- Bootstrap Core JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>
|
||||
|
||||
<!-- Custom Theme JavaScript -->
|
||||
<script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>
|
||||
|
||||
</body>
|
||||
|
||||
</html>
|
@ -1,135 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
|
||||
<head>
|
||||
<meta charset="utf-8">
|
||||
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="description" content="">
    <meta name="author" content="">

    <title>Andrew Ridgway's Blog - Tags</title>

    <link href="http://localhost:8000/feeds/all.atom.xml" type="application/atom+xml" rel="alternate" title="Andrew Ridgway's Blog Full Atom Feed" />

    <!-- Bootstrap Core CSS -->
    <link href="http://localhost:8000/theme/css/bootstrap.min.css" rel="stylesheet">

    <!-- Custom CSS -->
    <link href="http://localhost:8000/theme/css/clean-blog.min.css" rel="stylesheet">

    <!-- Code highlight color scheme -->
    <link href="http://localhost:8000/theme/css/code_blocks/tomorrow.css" rel="stylesheet">

    <!-- Custom Fonts -->
    <link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
    <link href='http://fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic' rel='stylesheet' type='text/css'>
    <link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>

    <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
    <!--[if lt IE 9]>
        <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
        <script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
    <![endif]-->

    <meta property="og:locale" content="en">
    <meta property="og:site_name" content="Andrew Ridgway's Blog">

</head>

<body>

    <!-- Navigation -->
    <nav class="navbar navbar-default navbar-custom navbar-fixed-top">
        <div class="container-fluid">
            <!-- Brand and toggle get grouped for better mobile display -->
            <div class="navbar-header page-scroll">
                <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                <a class="navbar-brand" href="http://localhost:8000/">Andrew Ridgway's Blog</a>
            </div>

            <!-- Collect the nav links, forms, and other content for toggling -->
            <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                <ul class="nav navbar-nav navbar-right">
                </ul>
            </div>
            <!-- /.navbar-collapse -->
        </div>
        <!-- /.container -->
    </nav>

    <!-- Page Header -->
    <header class="intro-header" style="background-image: url('https://wallpaperaccess.com/full/3239444.jpg')">
        <div class="container">
            <div class="row">
                <div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
                    <div class="post-heading">
                        <h1>Andrew Ridgway's Blog - Tags</h1>
                    </div>
                </div>
            </div>
        </div>
    </header>

    <!-- Main Content -->
    <div class="container">
        <div class="row">
            <div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
                <h1>Tags for Andrew Ridgway's Blog</h1>
                <li><a href="http://localhost:8000/tag/amazon.html">Amazon</a> (1)</li>
                <li><a href="http://localhost:8000/tag/containers.html">containers</a> (1)</li>
                <li><a href="http://localhost:8000/tag/cover-letter.html">Cover Letter</a> (2)</li>
                <li><a href="http://localhost:8000/tag/data-engineering.html">data engineering</a> (3)</li>
                <li><a href="http://localhost:8000/tag/duckdb.html">DuckDB</a> (1)</li>
                <li><a href="http://localhost:8000/tag/embedded.html">embedded</a> (1)</li>
                <li><a href="http://localhost:8000/tag/hardware.html">hardware</a> (1)</li>
                <li><a href="http://localhost:8000/tag/kubernetes.html">kubernetes</a> (1)</li>
                <li><a href="http://localhost:8000/tag/managed-services.html">Managed Services</a> (1)</li>
                <li><a href="http://localhost:8000/tag/metabase.html">Metabase</a> (1)</li>
                <li><a href="http://localhost:8000/tag/proxmox.html">proxmox</a> (1)</li>
                <li><a href="http://localhost:8000/tag/resume.html">Resume</a> (2)</li>
            </div>
        </div>
    </div>

    <hr>

    <!-- Footer -->
    <footer>
        <div class="container">
            <div class="row">
                <div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
                    <ul class="list-inline text-center">
                        <li>
                            <a href="https://git.aridgwayweb.com/explore/repos">
                                <span class="fa-stack fa-lg">
                                    <i class="fa fa-circle fa-stack-2x"></i>
                                    <i class="fa fa-github fa-stack-1x fa-inverse"></i>
                                </span>
                            </a>
                        </li>
                    </ul>
                    <p class="copyright text-muted">Blog powered by <a href="http://getpelican.com">Pelican</a>,
                        which takes great advantage of <a href="http://python.org">Python</a>.</p>
                </div>
            </div>
        </div>
    </footer>

    <!-- jQuery -->
    <script src="http://localhost:8000/theme/js/jquery.js"></script>

    <!-- Bootstrap Core JavaScript -->
    <script src="http://localhost:8000/theme/js/bootstrap.min.js"></script>

    <!-- Custom Theme JavaScript -->
    <script src="http://localhost:8000/theme/js/clean-blog.min.js"></script>

</body>

</html>
6358  src/output/theme/css/bootstrap.css  (vendored)
   5  src/output/theme/css/bootstrap.min.css  (vendored)
@@ -1,400 +0,0 @@
(400 deleted lines not shown: the Clean Blog v1.0.0 theme stylesheet — typography, navbar, intro header, post preview, pager, footer, floating-label form, and selection styles.)
   5  src/output/theme/css/clean-blog.min.css  (vendored)

@@ -1,38 +0,0 @@
(38 deleted lines not shown: the "Darkly" Pygments code-highlight theme (c) 2014 Sourcey — pre block styling and token colour classes.)
@@ -1,61 +0,0 @@
(61 deleted lines not shown: a light-background Pygments code-highlight theme — token colour classes only.)
@@ -1,80 +0,0 @@
(80 deleted lines not shown: the Monokai Pygments code-highlight theme — pre block styling and token colour classes.)
@@ -1,70 +0,0 @@
(70 deleted lines not shown: the Tomorrow Pygments code-highlight theme — token colour classes.)
@@ -1,70 +0,0 @@
(70 deleted lines not shown: the Tomorrow Night Pygments code-highlight theme — token colour classes.)
@@ -1,229 +0,0 @@
(Deleted lines not shown: the Glyphicons Halflings regular SVG webfont — XML font metadata and per-glyph path data.)
|
||||
<glyph unicode="" d="M0 800h100v-200h400v300h200v-300h400v200h100v100h-111q1 1 1 6.5t-1.5 15t-3.5 17.5l-34 172q-11 39 -41.5 63t-69.5 24q-32 0 -61 -17l-239 -144q-22 -13 -40 -35q-19 24 -40 36l-238 144q-33 18 -62 18q-39 0 -69.5 -23t-40.5 -61l-35 -177q-2 -8 -3 -18t-1 -15v-6 h-111v-100zM100 0h400v400h-400v-400zM200 900q-3 0 14 48t36 96l18 47l213 -191h-281zM700 0v400h400v-400h-400zM731 900l202 197q5 -12 12 -32.5t23 -64t25 -72t7 -28.5h-269z" />
|
||||
<glyph unicode="" d="M0 -22v143l216 193q-9 53 -13 83t-5.5 94t9 113t38.5 114t74 124q47 60 99.5 102.5t103 68t127.5 48t145.5 37.5t184.5 43.5t220 58.5q0 -189 -22 -343t-59 -258t-89 -181.5t-108.5 -120t-122 -68t-125.5 -30t-121.5 -1.5t-107.5 12.5t-87.5 17t-56.5 7.5l-99 -55z M238.5 300.5q19.5 -6.5 86.5 76.5q55 66 367 234q70 38 118.5 69.5t102 79t99 111.5t86.5 148q22 50 24 60t-6 19q-7 5 -17 5t-26.5 -14.5t-33.5 -39.5q-35 -51 -113.5 -108.5t-139.5 -89.5l-61 -32q-369 -197 -458 -401q-48 -111 -28.5 -117.5z" />
|
||||
<glyph unicode="" d="M111 408q0 -33 5 -63q9 -56 44 -119.5t105 -108.5q31 -21 64 -16t62 23.5t57 49.5t48 61.5t35 60.5q32 66 39 184.5t-13 157.5q79 -80 122 -164t26 -184q-5 -33 -20.5 -69.5t-37.5 -80.5q-10 -19 -14.5 -29t-12 -26t-9 -23.5t-3 -19t2.5 -15.5t11 -9.5t19.5 -5t30.5 2.5 t42 8q57 20 91 34t87.5 44.5t87 64t65.5 88.5t47 122q38 172 -44.5 341.5t-246.5 278.5q22 -44 43 -129q39 -159 -32 -154q-15 2 -33 9q-79 33 -120.5 100t-44 175.5t48.5 257.5q-13 -8 -34 -23.5t-72.5 -66.5t-88.5 -105.5t-60 -138t-8 -166.5q2 -12 8 -41.5t8 -43t6 -39.5 t3.5 -39.5t-1 -33.5t-6 -31.5t-13.5 -24t-21 -20.5t-31 -12q-38 -10 -67 13t-40.5 61.5t-15 81.5t10.5 75q-52 -46 -83.5 -101t-39 -107t-7.5 -85z" />
|
||||
<glyph unicode="" d="M-61 600l26 40q6 10 20 30t49 63.5t74.5 85.5t97 90t116.5 83.5t132.5 59t145.5 23.5t145.5 -23.5t132.5 -59t116.5 -83.5t97 -90t74.5 -85.5t49 -63.5t20 -30l26 -40l-26 -40q-6 -10 -20 -30t-49 -63.5t-74.5 -85.5t-97 -90t-116.5 -83.5t-132.5 -59t-145.5 -23.5 t-145.5 23.5t-132.5 59t-116.5 83.5t-97 90t-74.5 85.5t-49 63.5t-20 30zM120 600q7 -10 40.5 -58t56 -78.5t68 -77.5t87.5 -75t103 -49.5t125 -21.5t123.5 20t100.5 45.5t85.5 71.5t66.5 75.5t58 81.5t47 66q-1 1 -28.5 37.5t-42 55t-43.5 53t-57.5 63.5t-58.5 54 q49 -74 49 -163q0 -124 -88 -212t-212 -88t-212 88t-88 212q0 85 46 158q-102 -87 -226 -258zM377 656q49 -124 154 -191l105 105q-37 24 -75 72t-57 84l-20 36z" />
|
||||
<glyph unicode="" d="M-61 600l26 40q6 10 20 30t49 63.5t74.5 85.5t97 90t116.5 83.5t132.5 59t145.5 23.5q61 0 121 -17l37 142h148l-314 -1200h-148l37 143q-82 21 -165 71.5t-140 102t-109.5 112t-72 88.5t-29.5 43zM120 600q210 -282 393 -336l37 141q-107 18 -178.5 101.5t-71.5 193.5 q0 85 46 158q-102 -87 -226 -258zM377 656q49 -124 154 -191l47 47l23 87q-30 28 -59 69t-44 68l-14 26zM780 161l38 145q22 15 44.5 34t46 44t40.5 44t41 50.5t33.5 43.5t33 44t24.5 34q-97 127 -140 175l39 146q67 -54 131.5 -125.5t87.5 -103.5t36 -52l26 -40l-26 -40 q-7 -12 -25.5 -38t-63.5 -79.5t-95.5 -102.5t-124 -100t-146.5 -79z" />
|
||||
<glyph unicode="" d="M-97.5 34q13.5 -34 50.5 -34h1294q37 0 50.5 35.5t-7.5 67.5l-642 1056q-20 34 -48 36.5t-48 -29.5l-642 -1066q-21 -32 -7.5 -66zM155 200l445 723l445 -723h-345v100h-200v-100h-345zM500 600l100 -300l100 300v100h-200v-100z" />
|
||||
<glyph unicode="" d="M100 262v41q0 20 11 44.5t26 38.5l363 325v339q0 62 44 106t106 44t106 -44t44 -106v-339l363 -325q15 -14 26 -38.5t11 -44.5v-41q0 -20 -12 -26.5t-29 5.5l-359 249v-263q100 -91 100 -113v-64q0 -20 -13 -28.5t-32 0.5l-94 78h-222l-94 -78q-19 -9 -32 -0.5t-13 28.5 v64q0 22 100 113v263l-359 -249q-17 -12 -29 -5.5t-12 26.5z" />
|
||||
<glyph unicode="" d="M0 50q0 -20 14.5 -35t35.5 -15h1000q21 0 35.5 15t14.5 35v750h-1100v-750zM0 900h1100v150q0 21 -14.5 35.5t-35.5 14.5h-150v100h-100v-100h-500v100h-100v-100h-150q-21 0 -35.5 -14.5t-14.5 -35.5v-150zM100 100v100h100v-100h-100zM100 300v100h100v-100h-100z M100 500v100h100v-100h-100zM300 100v100h100v-100h-100zM300 300v100h100v-100h-100zM300 500v100h100v-100h-100zM500 100v100h100v-100h-100zM500 300v100h100v-100h-100zM500 500v100h100v-100h-100zM700 100v100h100v-100h-100zM700 300v100h100v-100h-100zM700 500 v100h100v-100h-100zM900 100v100h100v-100h-100zM900 300v100h100v-100h-100zM900 500v100h100v-100h-100z" />
|
||||
<glyph unicode="" d="M0 200v200h259l600 600h241v198l300 -295l-300 -300v197h-159l-600 -600h-341zM0 800h259l122 -122l141 142l-181 180h-341v-200zM678 381l141 142l122 -123h159v198l300 -295l-300 -300v197h-241z" />
|
||||
<glyph unicode="" d="M0 400v600q0 41 29.5 70.5t70.5 29.5h1000q41 0 70.5 -29.5t29.5 -70.5v-600q0 -41 -29.5 -70.5t-70.5 -29.5h-596l-304 -300v300h-100q-41 0 -70.5 29.5t-29.5 70.5z" />
|
||||
<glyph unicode="" d="M100 600v200h300v-250q0 -113 6 -145q17 -92 102 -117q39 -11 92 -11q37 0 66.5 5.5t50 15.5t36 24t24 31.5t14 37.5t7 42t2.5 45t0 47v25v250h300v-200q0 -42 -3 -83t-15 -104t-31.5 -116t-58 -109.5t-89 -96.5t-129 -65.5t-174.5 -25.5t-174.5 25.5t-129 65.5t-89 96.5 t-58 109.5t-31.5 116t-15 104t-3 83zM100 900v300h300v-300h-300zM800 900v300h300v-300h-300z" />
|
||||
<glyph unicode="" d="M-30 411l227 -227l352 353l353 -353l226 227l-578 579z" />
|
||||
<glyph unicode="" d="M70 797l580 -579l578 579l-226 227l-353 -353l-352 353z" />
|
||||
<glyph unicode="" d="M-198 700l299 283l300 -283h-203v-400h385l215 -200h-800v600h-196zM402 1000l215 -200h381v-400h-198l299 -283l299 283h-200v600h-796z" />
|
||||
<glyph unicode="" d="M18 939q-5 24 10 42q14 19 39 19h896l38 162q5 17 18.5 27.5t30.5 10.5h94q20 0 35 -14.5t15 -35.5t-15 -35.5t-35 -14.5h-54l-201 -961q-2 -4 -6 -10.5t-19 -17.5t-33 -11h-31v-50q0 -20 -14.5 -35t-35.5 -15t-35.5 15t-14.5 35v50h-300v-50q0 -20 -14.5 -35t-35.5 -15 t-35.5 15t-14.5 35v50h-50q-21 0 -35.5 15t-14.5 35q0 21 14.5 35.5t35.5 14.5h535l48 200h-633q-32 0 -54.5 21t-27.5 43z" />
|
||||
<glyph unicode="" d="M0 0v800h1200v-800h-1200zM0 900v100h200q0 41 29.5 70.5t70.5 29.5h300q41 0 70.5 -29.5t29.5 -70.5h500v-100h-1200z" />
|
||||
<glyph unicode="" d="M1 0l300 700h1200l-300 -700h-1200zM1 400v600h200q0 41 29.5 70.5t70.5 29.5h300q41 0 70.5 -29.5t29.5 -70.5h500v-200h-1000z" />
|
||||
<glyph unicode="" d="M302 300h198v600h-198l298 300l298 -300h-198v-600h198l-298 -300z" />
|
||||
<glyph unicode="" d="M0 600l300 298v-198h600v198l300 -298l-300 -297v197h-600v-197z" />
|
||||
<glyph unicode="" d="M0 100v100q0 41 29.5 70.5t70.5 29.5h1000q41 0 70.5 -29.5t29.5 -70.5v-100q0 -41 -29.5 -70.5t-70.5 -29.5h-1000q-41 0 -70.5 29.5t-29.5 70.5zM31 400l172 739q5 22 23 41.5t38 19.5h672q19 0 37.5 -22.5t23.5 -45.5l172 -732h-1138zM800 100h100v100h-100v-100z M1000 100h100v100h-100v-100z" />
|
||||
<glyph unicode="" d="M-101 600v50q0 24 25 49t50 38l25 13v-250l-11 5.5t-24 14t-30 21.5t-24 27.5t-11 31.5zM100 500v250v8v8v7t0.5 7t1.5 5.5t2 5t3 4t4.5 3.5t6 1.5t7.5 0.5h200l675 250v-850l-675 200h-38l47 -276q2 -12 -3 -17.5t-11 -6t-21 -0.5h-8h-83q-20 0 -34.5 14t-18.5 35 q-55 337 -55 351zM1100 200v850q0 21 14.5 35.5t35.5 14.5q20 0 35 -14.5t15 -35.5v-850q0 -20 -15 -35t-35 -15q-21 0 -35.5 15t-14.5 35z" />
|
||||
<glyph unicode="" d="M74 350q0 21 13.5 35.5t33.5 14.5h18l117 173l63 327q15 77 76 140t144 83l-18 32q-6 19 3 32t29 13h94q20 0 29 -10.5t3 -29.5q-18 -36 -18 -37q83 -19 144 -82.5t76 -140.5l63 -327l118 -173h17q20 0 33.5 -14.5t13.5 -35.5q0 -20 -13 -40t-31 -27q-8 -3 -23 -8.5 t-65 -20t-103 -25t-132.5 -19.5t-158.5 -9q-125 0 -245.5 20.5t-178.5 40.5l-58 20q-18 7 -31 27.5t-13 40.5zM497 110q12 -49 40 -79.5t63 -30.5t63 30.5t39 79.5q-48 -6 -102 -6t-103 6z" />
|
||||
<glyph unicode="" d="M21 445l233 -45l-78 -224l224 78l45 -233l155 179l155 -179l45 233l224 -78l-78 224l234 45l-180 155l180 156l-234 44l78 225l-224 -78l-45 233l-155 -180l-155 180l-45 -233l-224 78l78 -225l-233 -44l179 -156z" />
|
||||
<glyph unicode="" d="M0 200h200v600h-200v-600zM300 275q0 -75 100 -75h61q124 -100 139 -100h250q46 0 83 57l238 344q29 31 29 74v100q0 44 -30.5 84.5t-69.5 40.5h-328q28 118 28 125v150q0 44 -30.5 84.5t-69.5 40.5h-50q-27 0 -51 -20t-38 -48l-96 -198l-145 -196q-20 -26 -20 -63v-400z M400 300v375l150 213l100 212h50v-175l-50 -225h450v-125l-250 -375h-214l-136 100h-100z" />
|
||||
<glyph unicode="" d="M0 400v600h200v-600h-200zM300 525v400q0 75 100 75h61q124 100 139 100h250q46 0 83 -57l238 -344q29 -31 29 -74v-100q0 -44 -30.5 -84.5t-69.5 -40.5h-328q28 -118 28 -125v-150q0 -44 -30.5 -84.5t-69.5 -40.5h-50q-27 0 -51 20t-38 48l-96 198l-145 196 q-20 26 -20 63zM400 525l150 -212l100 -213h50v175l-50 225h450v125l-250 375h-214l-136 -100h-100v-375z" />
|
||||
<glyph unicode="" d="M8 200v600h200v-600h-200zM308 275v525q0 17 14 35.5t28 28.5l14 9l362 230q14 6 25 6q17 0 29 -12l109 -112q14 -14 14 -34q0 -18 -11 -32l-85 -121h302q85 0 138.5 -38t53.5 -110t-54.5 -111t-138.5 -39h-107l-130 -339q-7 -22 -20.5 -41.5t-28.5 -19.5h-341 q-7 0 -90 81t-83 94zM408 289l100 -89h293l131 339q6 21 19.5 41t28.5 20h203q16 0 25 15t9 36q0 20 -9 34.5t-25 14.5h-457h-6.5h-7.5t-6.5 0.5t-6 1t-5 1.5t-5.5 2.5t-4 4t-4 5.5q-5 12 -5 20q0 14 10 27l147 183l-86 83l-339 -236v-503z" />
|
||||
<glyph unicode="" d="M-101 651q0 72 54 110t139 38l302 -1l-85 121q-11 16 -11 32q0 21 14 34l109 113q13 12 29 12q11 0 25 -6l365 -230q7 -4 17 -10.5t26.5 -26t16.5 -36.5v-526q0 -13 -86 -93.5t-94 -80.5h-341q-16 0 -29.5 20t-19.5 41l-130 339h-107q-84 0 -139 39t-55 111zM-1 601h222 q15 0 28.5 -20.5t19.5 -40.5l131 -339h293l107 89v502l-343 237l-87 -83l145 -184q10 -11 10 -26q0 -11 -5 -20q-1 -3 -3.5 -5.5l-4 -4t-5 -2.5t-5.5 -1.5t-6.5 -1t-6.5 -0.5h-7.5h-6.5h-476v-100zM1000 201v600h200v-600h-200z" />
|
||||
<glyph unicode="" d="M97 719l230 -363q4 -6 10.5 -15.5t26 -25t36.5 -15.5h525q13 0 94 83t81 90v342q0 15 -20 28.5t-41 19.5l-339 131v106q0 84 -39 139t-111 55t-110 -53.5t-38 -138.5v-302l-121 84q-15 12 -33.5 11.5t-32.5 -13.5l-112 -110q-22 -22 -6 -53zM172 739l83 86l183 -146 q22 -18 47 -5q3 1 5.5 3.5l4 4t2.5 5t1.5 5.5t1 6.5t0.5 6.5v7.5v6.5v456q0 22 25 31t50 -0.5t25 -30.5v-202q0 -16 20 -29.5t41 -19.5l339 -130v-294l-89 -100h-503zM400 0v200h600v-200h-600z" />
|
||||
<glyph unicode="" d="M2 585q-16 -31 6 -53l112 -110q13 -13 32 -13.5t34 10.5l121 85q0 -51 -0.5 -153.5t-0.5 -148.5q0 -84 38.5 -138t110.5 -54t111 55t39 139v106l339 131q20 6 40.5 19.5t20.5 28.5v342q0 7 -81 90t-94 83h-525q-17 0 -35.5 -14t-28.5 -28l-10 -15zM77 565l236 339h503 l89 -100v-294l-340 -130q-20 -6 -40 -20t-20 -29v-202q0 -22 -25 -31t-50 0t-25 31v456v14.5t-1.5 11.5t-5 12t-9.5 7q-24 13 -46 -5l-184 -146zM305 1104v200h600v-200h-600z" />
|
||||
<glyph unicode="" d="M5 597q0 122 47.5 232.5t127.5 190.5t190.5 127.5t232.5 47.5q162 0 299.5 -80t217.5 -218t80 -300t-80 -299.5t-217.5 -217.5t-299.5 -80t-300 80t-218 217.5t-80 299.5zM298 701l2 -201h300l-2 -194l402 294l-402 298v-197h-300z" />
|
||||
<glyph unicode="" d="M0 597q0 122 47.5 232.5t127.5 190.5t190.5 127.5t231.5 47.5q122 0 232.5 -47.5t190.5 -127.5t127.5 -190.5t47.5 -232.5q0 -162 -80 -299.5t-218 -217.5t-300 -80t-299.5 80t-217.5 217.5t-80 299.5zM200 600l402 -294l-2 194h300l2 201h-300v197z" />
|
||||
<glyph unicode="" d="M5 597q0 122 47.5 232.5t127.5 190.5t190.5 127.5t232.5 47.5q162 0 299.5 -80t217.5 -218t80 -300t-80 -299.5t-217.5 -217.5t-299.5 -80t-300 80t-218 217.5t-80 299.5zM300 600h200v-300h200v300h200l-300 400z" />
|
||||
<glyph unicode="" d="M5 597q0 122 47.5 232.5t127.5 190.5t190.5 127.5t232.5 47.5q162 0 299.5 -80t217.5 -218t80 -300t-80 -299.5t-217.5 -217.5t-299.5 -80t-300 80t-218 217.5t-80 299.5zM300 600l300 -400l300 400h-200v300h-200v-300h-200z" />
|
||||
<glyph unicode="" d="M5 597q0 122 47.5 232.5t127.5 190.5t190.5 127.5t232.5 47.5q121 0 231.5 -47.5t190.5 -127.5t127.5 -190.5t47.5 -232.5q0 -162 -80 -299.5t-217.5 -217.5t-299.5 -80t-300 80t-218 217.5t-80 299.5zM254 780q-8 -33 5.5 -92.5t7.5 -87.5q0 -9 17 -44t16 -60 q12 0 23 -5.5t23 -15t20 -13.5q24 -12 108 -42q22 -8 53 -31.5t59.5 -38.5t57.5 -11q8 -18 -15 -55t-20 -57q42 -71 87 -80q0 -6 -3 -15.5t-3.5 -14.5t4.5 -17q104 -3 221 112q30 29 47 47t34.5 49t20.5 62q-14 9 -37 9.5t-36 7.5q-14 7 -49 15t-52 19q-9 0 -39.5 -0.5 t-46.5 -1.5t-39 -6.5t-39 -16.5q-50 -35 -66 -12q-4 2 -3.5 25.5t0.5 25.5q-6 13 -26.5 17t-24.5 7q2 22 -2 41t-16.5 28t-38.5 -20q-23 -25 -42 4q-19 28 -8 58q6 16 22 22q6 -1 26 -1.5t33.5 -4t19.5 -13.5q12 -19 32 -37.5t34 -27.5l14 -8q0 3 9.5 39.5t5.5 57.5 q-4 23 14.5 44.5t22.5 31.5q5 14 10 35t8.5 31t15.5 22.5t34 21.5q-6 18 10 37q8 0 23.5 -1.5t24.5 -1.5t20.5 4.5t20.5 15.5q-10 23 -30.5 42.5t-38 30t-49 26.5t-43.5 23q11 39 2 44q31 -13 58 -14.5t39 3.5l11 4q7 36 -16.5 53.5t-64.5 28.5t-56 23q-19 -3 -37 0 q-15 -12 -36.5 -21t-34.5 -12t-44 -8t-39 -6q-15 -3 -45.5 0.5t-45.5 -2.5q-21 -7 -52 -26.5t-34 -34.5q-3 -11 6.5 -22.5t8.5 -18.5q-3 -34 -27.5 -90.5t-29.5 -79.5zM518 916q3 12 16 30t16 25q10 -10 18.5 -10t14 6t14.5 14.5t16 12.5q0 -24 17 -66.5t17 -43.5 q-9 2 -31 5t-36 5t-32 8t-30 14zM692 1003h1h-1z" />
|
||||
<glyph unicode="" d="M0 164.5q0 21.5 15 37.5l600 599q-33 101 6 201.5t135 154.5q164 92 306 -9l-259 -138l145 -232l251 126q13 -175 -151 -267q-123 -70 -253 -23l-596 -596q-15 -16 -36.5 -16t-36.5 16l-111 110q-15 15 -15 36.5z" />
|
||||
<glyph unicode="" horiz-adv-x="1220" d="M0 196v100q0 41 29.5 70.5t70.5 29.5h1000q41 0 70.5 -29.5t29.5 -70.5v-100q0 -41 -29.5 -70.5t-70.5 -29.5h-1000q-41 0 -70.5 29.5t-29.5 70.5zM0 596v100q0 41 29.5 70.5t70.5 29.5h1000q41 0 70.5 -29.5t29.5 -70.5v-100q0 -41 -29.5 -70.5t-70.5 -29.5h-1000 q-41 0 -70.5 29.5t-29.5 70.5zM0 996v100q0 41 29.5 70.5t70.5 29.5h1000q41 0 70.5 -29.5t29.5 -70.5v-100q0 -41 -29.5 -70.5t-70.5 -29.5h-1000q-41 0 -70.5 29.5t-29.5 70.5zM600 596h500v100h-500v-100zM800 196h300v100h-300v-100zM900 996h200v100h-200v-100z" />
|
||||
<glyph unicode="" d="M100 1100v100h1000v-100h-1000zM150 1000h900l-350 -500v-300l-200 -200v500z" />
|
||||
<glyph unicode="" d="M0 200v200h1200v-200q0 -41 -29.5 -70.5t-70.5 -29.5h-1000q-41 0 -70.5 29.5t-29.5 70.5zM0 500v400q0 41 29.5 70.5t70.5 29.5h300v100q0 41 29.5 70.5t70.5 29.5h200q41 0 70.5 -29.5t29.5 -70.5v-100h300q41 0 70.5 -29.5t29.5 -70.5v-400h-500v100h-200v-100h-500z M500 1000h200v100h-200v-100z" />
|
||||
<glyph unicode="" d="M0 0v400l129 -129l200 200l142 -142l-200 -200l129 -129h-400zM0 800l129 129l200 -200l142 142l-200 200l129 129h-400v-400zM729 329l142 142l200 -200l129 129v-400h-400l129 129zM729 871l200 200l-129 129h400v-400l-129 129l-200 -200z" />
|
||||
<glyph unicode="" d="M0 596q0 162 80 299t217 217t299 80t299 -80t217 -217t80 -299t-80 -299t-217 -217t-299 -80t-299 80t-217 217t-80 299zM182 596q0 -172 121.5 -293t292.5 -121t292.5 121t121.5 293q0 171 -121.5 292.5t-292.5 121.5t-292.5 -121.5t-121.5 -292.5zM291 655 q0 23 15.5 38.5t38.5 15.5t39 -16t16 -38q0 -23 -16 -39t-39 -16q-22 0 -38 16t-16 39zM400 850q0 22 16 38.5t39 16.5q22 0 38 -16t16 -39t-16 -39t-38 -16q-23 0 -39 16.5t-16 38.5zM514 609q0 32 20.5 56.5t51.5 29.5l122 126l1 1q-9 14 -9 28q0 22 16 38.5t39 16.5 q22 0 38 -16t16 -39t-16 -39t-38 -16q-14 0 -29 10l-55 -145q17 -22 17 -51q0 -36 -25.5 -61.5t-61.5 -25.5t-61.5 25.5t-25.5 61.5zM800 655q0 22 16 38t39 16t38.5 -15.5t15.5 -38.5t-16 -39t-38 -16q-23 0 -39 16t-16 39z" />
|
||||
<glyph unicode="" d="M-40 375q-13 -95 35 -173q35 -57 94 -89t129 -32q63 0 119 28q33 16 65 40.5t52.5 45.5t59.5 64q40 44 57 61l394 394q35 35 47 84t-3 96q-27 87 -117 104q-20 2 -29 2q-46 0 -78.5 -16.5t-67.5 -51.5l-389 -396l-7 -7l69 -67l377 373q20 22 39 38q23 23 50 23 q38 0 53 -36q16 -39 -20 -75l-547 -547q-52 -52 -125 -52q-55 0 -100 33t-54 96q-5 35 2.5 66t31.5 63t42 50t56 54q24 21 44 41l348 348q52 52 82.5 79.5t84 54t107.5 26.5q25 0 48 -4q95 -17 154 -94.5t51 -175.5q-7 -101 -98 -192l-252 -249l-253 -256l7 -7l69 -60 l517 511q67 67 95 157t11 183q-16 87 -67 154t-130 103q-69 33 -152 33q-107 0 -197 -55q-40 -24 -111 -95l-512 -512q-68 -68 -81 -163z" />
|
||||
<glyph unicode="" d="M80 784q0 131 98.5 229.5t230.5 98.5q143 0 241 -129q103 129 246 129q129 0 226 -98.5t97 -229.5q0 -46 -17.5 -91t-61 -99t-77 -89.5t-104.5 -105.5q-197 -191 -293 -322l-17 -23l-16 23q-43 58 -100 122.5t-92 99.5t-101 100q-71 70 -104.5 105.5t-77 89.5t-61 99 t-17.5 91zM250 784q0 -27 30.5 -70t61.5 -75.5t95 -94.5l22 -22q93 -90 190 -201q82 92 195 203l12 12q64 62 97.5 97t64.5 79t31 72q0 71 -48 119.5t-105 48.5q-74 0 -132 -83l-118 -171l-114 174q-51 80 -123 80q-60 0 -109.5 -49.5t-49.5 -118.5z" />
|
||||
<glyph unicode="" d="M57 353q0 -95 66 -159l141 -142q68 -66 159 -66q93 0 159 66l283 283q66 66 66 159t-66 159l-141 141q-8 9 -19 17l-105 -105l212 -212l-389 -389l-247 248l95 95l-18 18q-46 45 -75 101l-55 -55q-66 -66 -66 -159zM269 706q0 -93 66 -159l141 -141q7 -7 19 -17l105 105 l-212 212l389 389l247 -247l-95 -96l18 -17q47 -49 77 -100l29 29q35 35 62.5 88t27.5 96q0 93 -66 159l-141 141q-66 66 -159 66q-95 0 -159 -66l-283 -283q-66 -64 -66 -159z" />
|
||||
<glyph unicode="" d="M200 100v953q0 21 30 46t81 48t129 38t163 15t162 -15t127 -38t79 -48t29 -46v-953q0 -41 -29.5 -70.5t-70.5 -29.5h-600q-41 0 -70.5 29.5t-29.5 70.5zM300 300h600v700h-600v-700zM496 150q0 -43 30.5 -73.5t73.5 -30.5t73.5 30.5t30.5 73.5t-30.5 73.5t-73.5 30.5 t-73.5 -30.5t-30.5 -73.5z" />
|
||||
<glyph unicode="" d="M0 0l303 380l207 208l-210 212h300l267 279l-35 36q-15 14 -15 35t15 35q14 15 35 15t35 -15l283 -282q15 -15 15 -36t-15 -35q-14 -15 -35 -15t-35 15l-36 35l-279 -267v-300l-212 210l-208 -207z" />
|
||||
<glyph unicode="" d="M295 433h139q5 -77 48.5 -126.5t117.5 -64.5v335q-6 1 -15.5 4t-11.5 3q-46 14 -79 26.5t-72 36t-62.5 52t-40 72.5t-16.5 99q0 92 44 159.5t109 101t144 40.5v78h100v-79q38 -4 72.5 -13.5t75.5 -31.5t71 -53.5t51.5 -84t24.5 -118.5h-159q-8 72 -35 109.5t-101 50.5 v-307l64 -14q34 -7 64 -16.5t70 -31.5t67.5 -52t47.5 -80.5t20 -112.5q0 -139 -89 -224t-244 -96v-77h-100v78q-152 17 -237 104q-40 40 -52.5 93.5t-15.5 139.5zM466 889q0 -29 8 -51t16.5 -34t29.5 -22.5t31 -13.5t38 -10q7 -2 11 -3v274q-61 -8 -97.5 -37.5t-36.5 -102.5 zM700 237q170 18 170 151q0 64 -44 99.5t-126 60.5v-311z" />
|
||||
<glyph unicode="" d="M100 600v100h166q-24 49 -44 104q-10 26 -14.5 55.5t-3 72.5t25 90t68.5 87q97 88 263 88q129 0 230 -89t101 -208h-153q0 52 -34 89.5t-74 51.5t-76 14q-37 0 -79 -14.5t-62 -35.5q-41 -44 -41 -101q0 -28 16.5 -69.5t28 -62.5t41.5 -72h241v-100h-197q8 -50 -2.5 -115 t-31.5 -94q-41 -59 -99 -113q35 11 84 18t70 7q33 1 103 -16t103 -17q76 0 136 30l50 -147q-41 -25 -80.5 -36.5t-59 -13t-61.5 -1.5q-23 0 -128 33t-155 29q-39 -4 -82 -17t-66 -25l-24 -11l-55 145l16.5 11t15.5 10t13.5 9.5t14.5 12t14.5 14t17.5 18.5q48 55 54 126.5 t-30 142.5h-221z" />
|
||||
<glyph unicode="" d="M2 300l298 -300l298 300h-198v900h-200v-900h-198zM602 900l298 300l298 -300h-198v-900h-200v900h-198z" />
|
||||
<glyph unicode="" d="M2 300h198v900h200v-900h198l-298 -300zM700 0v200h100v-100h200v-100h-300zM700 400v100h300v-200h-99v-100h-100v100h99v100h-200zM700 700v500h300v-500h-100v100h-100v-100h-100zM801 900h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M2 300h198v900h200v-900h198l-298 -300zM700 0v500h300v-500h-100v100h-100v-100h-100zM700 700v200h100v-100h200v-100h-300zM700 1100v100h300v-200h-99v-100h-100v100h99v100h-200zM801 200h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M2 300l298 -300l298 300h-198v900h-200v-900h-198zM800 100v400h300v-500h-100v100h-200zM800 1100v100h200v-500h-100v400h-100zM901 200h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M2 300l298 -300l298 300h-198v900h-200v-900h-198zM800 400v100h200v-500h-100v400h-100zM800 800v400h300v-500h-100v100h-200zM901 900h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M2 300l298 -300l298 300h-198v900h-200v-900h-198zM700 100v200h500v-200h-500zM700 400v200h400v-200h-400zM700 700v200h300v-200h-300zM700 1000v200h200v-200h-200z" />
|
||||
<glyph unicode="" d="M2 300l298 -300l298 300h-198v900h-200v-900h-198zM700 100v200h200v-200h-200zM700 400v200h300v-200h-300zM700 700v200h400v-200h-400zM700 1000v200h500v-200h-500z" />
|
||||
<glyph unicode="" d="M0 400v300q0 165 117.5 282.5t282.5 117.5h300q162 0 281 -118.5t119 -281.5v-300q0 -165 -118.5 -282.5t-281.5 -117.5h-300q-165 0 -282.5 117.5t-117.5 282.5zM200 300q0 -41 29.5 -70.5t70.5 -29.5h500q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5 h-500q-41 0 -70.5 -29.5t-29.5 -70.5v-500z" />
|
||||
<glyph unicode="" d="M0 400v300q0 163 119 281.5t281 118.5h300q165 0 282.5 -117.5t117.5 -282.5v-300q0 -165 -117.5 -282.5t-282.5 -117.5h-300q-163 0 -281.5 117.5t-118.5 282.5zM200 300q0 -41 29.5 -70.5t70.5 -29.5h500q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5 h-500q-41 0 -70.5 -29.5t-29.5 -70.5v-500zM400 300l333 250l-333 250v-500z" />
|
||||
<glyph unicode="" d="M0 400v300q0 163 117.5 281.5t282.5 118.5h300q163 0 281.5 -119t118.5 -281v-300q0 -165 -117.5 -282.5t-282.5 -117.5h-300q-165 0 -282.5 117.5t-117.5 282.5zM200 300q0 -41 29.5 -70.5t70.5 -29.5h500q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5 h-500q-41 0 -70.5 -29.5t-29.5 -70.5v-500zM300 700l250 -333l250 333h-500z" />
|
||||
<glyph unicode="" d="M0 400v300q0 165 117.5 282.5t282.5 117.5h300q165 0 282.5 -117.5t117.5 -282.5v-300q0 -162 -118.5 -281t-281.5 -119h-300q-165 0 -282.5 118.5t-117.5 281.5zM200 300q0 -41 29.5 -70.5t70.5 -29.5h500q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5 h-500q-41 0 -70.5 -29.5t-29.5 -70.5v-500zM300 400h500l-250 333z" />
|
||||
<glyph unicode="" d="M0 400v300h300v200l400 -350l-400 -350v200h-300zM500 0v200h500q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5h-500v200h400q165 0 282.5 -117.5t117.5 -282.5v-300q0 -165 -117.5 -282.5t-282.5 -117.5h-400z" />
|
||||
<glyph unicode="" d="M217 519q8 -19 31 -19h302q-155 -438 -160 -458q-5 -21 4 -32l9 -8h9q14 0 26 15q11 13 274.5 321.5t264.5 308.5q14 19 5 36q-8 17 -31 17l-301 -1q1 4 78 219.5t79 227.5q2 15 -5 27l-9 9h-9q-15 0 -25 -16q-4 -6 -98 -111.5t-228.5 -257t-209.5 -237.5q-16 -19 -6 -41 z" />
|
||||
<glyph unicode="" d="M0 400q0 -165 117.5 -282.5t282.5 -117.5h300q47 0 100 15v185h-500q-41 0 -70.5 29.5t-29.5 70.5v500q0 41 29.5 70.5t70.5 29.5h500v185q-14 4 -114 7.5t-193 5.5l-93 2q-165 0 -282.5 -117.5t-117.5 -282.5v-300zM600 400v300h300v200l400 -350l-400 -350v200h-300z " />
|
||||
<glyph unicode="" d="M0 400q0 -165 117.5 -282.5t282.5 -117.5h300q163 0 281.5 117.5t118.5 282.5v98l-78 73l-122 -123v-148q0 -41 -29.5 -70.5t-70.5 -29.5h-500q-41 0 -70.5 29.5t-29.5 70.5v500q0 41 29.5 70.5t70.5 29.5h156l118 122l-74 78h-100q-165 0 -282.5 -117.5t-117.5 -282.5 v-300zM496 709l353 342l-149 149h500v-500l-149 149l-342 -353z" />
|
||||
<glyph unicode="" d="M4 600q0 162 80 299t217 217t299 80t299 -80t217 -217t80 -299t-80 -299t-217 -217t-299 -80t-299 80t-217 217t-80 299zM186 600q0 -171 121.5 -292.5t292.5 -121.5t292.5 121.5t121.5 292.5t-121.5 292.5t-292.5 121.5t-292.5 -121.5t-121.5 -292.5zM406 600 q0 80 57 137t137 57t137 -57t57 -137t-57 -137t-137 -57t-137 57t-57 137z" />
|
||||
<glyph unicode="" d="M0 0v275q0 11 7 18t18 7h1048q11 0 19 -7.5t8 -17.5v-275h-1100zM100 800l445 -500l450 500h-295v400h-300v-400h-300zM900 150h100v50h-100v-50z" />
|
||||
<glyph unicode="" d="M0 0v275q0 11 7 18t18 7h1048q11 0 19 -7.5t8 -17.5v-275h-1100zM100 700h300v-300h300v300h295l-445 500zM900 150h100v50h-100v-50z" />
|
||||
<glyph unicode="" d="M0 0v275q0 11 7 18t18 7h1048q11 0 19 -7.5t8 -17.5v-275h-1100zM100 705l305 -305l596 596l-154 155l-442 -442l-150 151zM900 150h100v50h-100v-50z" />
|
||||
<glyph unicode="" d="M0 0v275q0 11 7 18t18 7h1048q11 0 19 -7.5t8 -17.5v-275h-1100zM100 988l97 -98l212 213l-97 97zM200 400l697 1l3 699l-250 -239l-149 149l-212 -212l149 -149zM900 150h100v50h-100v-50z" />
|
||||
<glyph unicode="" d="M0 0v275q0 11 7 18t18 7h1048q11 0 19 -7.5t8 -17.5v-275h-1100zM200 612l212 -212l98 97l-213 212zM300 1200l239 -250l-149 -149l212 -212l149 148l249 -237l-1 697zM900 150h100v50h-100v-50z" />
|
||||
<glyph unicode="" d="M23 415l1177 784v-1079l-475 272l-310 -393v416h-392zM494 210l672 938l-672 -712v-226z" />
|
||||
<glyph unicode="" d="M0 150v1000q0 20 14.5 35t35.5 15h250v-300h500v300h100l200 -200v-850q0 -21 -15 -35.5t-35 -14.5h-150v400h-700v-400h-150q-21 0 -35.5 14.5t-14.5 35.5zM600 1000h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M0 150v1000q0 20 14.5 35t35.5 15h250v-300h500v300h100l200 -200v-218l-276 -275l-120 120l-126 -127h-378v-400h-150q-21 0 -35.5 14.5t-14.5 35.5zM581 306l123 123l120 -120l353 352l123 -123l-475 -476zM600 1000h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M0 150v1000q0 20 14.5 35t35.5 15h250v-300h500v300h100l200 -200v-269l-103 -103l-170 170l-298 -298h-329v-400h-150q-21 0 -35.5 14.5t-14.5 35.5zM600 1000h100v200h-100v-200zM700 133l170 170l-170 170l127 127l170 -170l170 170l127 -128l-170 -169l170 -170 l-127 -127l-170 170l-170 -170z" />
|
||||
<glyph unicode="" d="M0 150v1000q0 20 14.5 35t35.5 15h250v-300h500v300h100l200 -200v-300h-400v-200h-500v-400h-150q-21 0 -35.5 14.5t-14.5 35.5zM600 300l300 -300l300 300h-200v300h-200v-300h-200zM600 1000v200h100v-200h-100z" />
|
||||
<glyph unicode="" d="M0 150v1000q0 20 14.5 35t35.5 15h250v-300h500v300h100l200 -200v-402l-200 200l-298 -298h-402v-400h-150q-21 0 -35.5 14.5t-14.5 35.5zM600 300h200v-300h200v300h200l-300 300zM600 1000v200h100v-200h-100z" />
|
||||
<glyph unicode="" d="M0 250q0 -21 14.5 -35.5t35.5 -14.5h1100q21 0 35.5 14.5t14.5 35.5v550h-1200v-550zM0 900h1200v150q0 21 -14.5 35.5t-35.5 14.5h-1100q-21 0 -35.5 -14.5t-14.5 -35.5v-150zM100 300v200h400v-200h-400z" />
|
||||
<glyph unicode="" d="M0 400l300 298v-198h400v-200h-400v-198zM100 800v200h100v-200h-100zM300 800v200h100v-200h-100zM500 800v200h400v198l300 -298l-300 -298v198h-400zM800 300v200h100v-200h-100zM1000 300h100v200h-100v-200z" />
|
||||
<glyph unicode="" d="M100 700v400l50 100l50 -100v-300h100v300l50 100l50 -100v-300h100v300l50 100l50 -100v-400l-100 -203v-447q0 -21 -14.5 -35.5t-35.5 -14.5h-200q-21 0 -35.5 14.5t-14.5 35.5v447zM800 597q0 -29 10.5 -55.5t25 -43t29 -28.5t25.5 -18l10 -5v-397q0 -21 14.5 -35.5 t35.5 -14.5h200q21 0 35.5 14.5t14.5 35.5v1106q0 31 -18 40.5t-44 -7.5l-276 -116q-25 -17 -43.5 -51.5t-18.5 -65.5v-359z" />
|
||||
<glyph unicode="" d="M100 0h400v56q-75 0 -87.5 6t-12.5 44v394h500v-394q0 -38 -12.5 -44t-87.5 -6v-56h400v56q-4 0 -11 0.5t-24 3t-30 7t-24 15t-11 24.5v888q0 22 25 34.5t50 13.5l25 2v56h-400v-56q75 0 87.5 -6t12.5 -44v-394h-500v394q0 38 12.5 44t87.5 6v56h-400v-56q4 0 11 -0.5 t24 -3t30 -7t24 -15t11 -24.5v-888q0 -22 -25 -34.5t-50 -13.5l-25 -2v-56z" />
|
||||
<glyph unicode="" d="M0 300q0 -41 29.5 -70.5t70.5 -29.5h300q41 0 70.5 29.5t29.5 70.5v500q0 41 -29.5 70.5t-70.5 29.5h-300q-41 0 -70.5 -29.5t-29.5 -70.5v-500zM100 100h400l200 200h105l295 98v-298h-425l-100 -100h-375zM100 300v200h300v-200h-300zM100 600v200h300v-200h-300z M100 1000h400l200 -200v-98l295 98h105v200h-425l-100 100h-375zM700 402v163l400 133v-163z" />
|
||||
<glyph unicode="" d="M16.5 974.5q0.5 -21.5 16 -90t46.5 -140t104 -177.5t175 -208q103 -103 207.5 -176t180 -103.5t137 -47t92.5 -16.5l31 1l163 162q17 18 13.5 41t-22.5 37l-192 136q-19 14 -45 12t-42 -19l-118 -118q-142 101 -268 227t-227 268l118 118q17 17 20 41.5t-11 44.5 l-139 194q-14 19 -36.5 22t-40.5 -14l-162 -162q-1 -11 -0.5 -32.5z" />
|
||||
<glyph unicode="" d="M0 50v212q0 20 10.5 45.5t24.5 39.5l365 303v50q0 4 1 10.5t12 22.5t30 28.5t60 23t97 10.5t97 -10t60 -23.5t30 -27.5t12 -24l1 -10v-50l365 -303q14 -14 24.5 -39.5t10.5 -45.5v-212q0 -21 -14.5 -35.5t-35.5 -14.5h-1100q-20 0 -35 14.5t-15 35.5zM0 712 q0 -21 14.5 -33.5t34.5 -8.5l202 33q20 4 34.5 21t14.5 38v146q141 24 300 24t300 -24v-146q0 -21 14.5 -38t34.5 -21l202 -33q20 -4 34.5 8.5t14.5 33.5v200q-6 8 -19 20.5t-63 45t-112 57t-171 45t-235 20.5q-92 0 -175 -10.5t-141.5 -27t-108.5 -36.5t-81.5 -40 t-53.5 -36.5t-31 -27.5l-9 -10v-200z" />
|
||||
<glyph unicode="" d="M100 0v100h1100v-100h-1100zM175 200h950l-125 150v250l100 100v400h-100v-200h-100v200h-200v-200h-100v200h-200v-200h-100v200h-100v-400l100 -100v-250z" />
|
||||
<glyph unicode="" d="M100 0h300v400q0 41 -29.5 70.5t-70.5 29.5h-100q-41 0 -70.5 -29.5t-29.5 -70.5v-400zM500 0v1000q0 41 29.5 70.5t70.5 29.5h100q41 0 70.5 -29.5t29.5 -70.5v-1000h-300zM900 0v700q0 41 29.5 70.5t70.5 29.5h100q41 0 70.5 -29.5t29.5 -70.5v-700h-300z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300h300v300h-200v100h200v100h-300v-300h200v-100h-200v-100zM600 300h200v100h100v300h-100v100h-200v-500 zM700 400v300h100v-300h-100z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300h100v200h100v-200h100v500h-100v-200h-100v200h-100v-500zM600 300h200v100h100v300h-100v100h-200v-500 zM700 400v300h100v-300h-100z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300h300v100h-200v300h200v100h-300v-500zM600 300h300v100h-200v300h200v100h-300v-500z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 550l300 -150v300zM600 400l300 150l-300 150v-300z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300v500h700v-500h-700zM300 400h130q41 0 68 42t27 107t-28.5 108t-66.5 43h-130v-300zM575 549 q0 -65 27 -107t68 -42h130v300h-130q-38 0 -66.5 -43t-28.5 -108z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300h300v300h-200v100h200v100h-300v-300h200v-100h-200v-100zM601 300h100v100h-100v-100zM700 700h100 v-400h100v500h-200v-100z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 300h300v400h-200v100h-100v-500zM301 400v200h100v-200h-100zM601 300h100v100h-100v-100zM700 700h100 v-400h100v500h-200v-100z" />
|
||||
<glyph unicode="" d="M-100 300v500q0 124 88 212t212 88h700q124 0 212 -88t88 -212v-500q0 -124 -88 -212t-212 -88h-700q-124 0 -212 88t-88 212zM100 200h900v700h-900v-700zM200 700v100h300v-300h-99v-100h-100v100h99v200h-200zM201 300v100h100v-100h-100zM601 300v100h100v-100h-100z M700 700v100h200v-500h-100v400h-100z" />
|
||||
<glyph unicode="" d="M4 600q0 162 80 299t217 217t299 80t299 -80t217 -217t80 -299t-80 -299t-217 -217t-299 -80t-299 80t-217 217t-80 299zM186 600q0 -171 121.5 -292.5t292.5 -121.5t292.5 121.5t121.5 292.5t-121.5 292.5t-292.5 121.5t-292.5 -121.5t-121.5 -292.5zM400 500v200 l100 100h300v-100h-300v-200h300v-100h-300z" />
|
||||
<glyph unicode="" d="M0 600q0 162 80 299t217 217t299 80t299 -80t217 -217t80 -299t-80 -299t-217 -217t-299 -80t-299 80t-217 217t-80 299zM182 600q0 -171 121.5 -292.5t292.5 -121.5t292.5 121.5t121.5 292.5t-121.5 292.5t-292.5 121.5t-292.5 -121.5t-121.5 -292.5zM400 400v400h300 l100 -100v-100h-100v100h-200v-100h200v-100h-200v-100h-100zM700 400v100h100v-100h-100z" />
|
||||
<glyph unicode="" d="M-14 494q0 -80 56.5 -137t135.5 -57h222v300h400v-300h128q120 0 205 86.5t85 207.5t-85 207t-205 86q-46 0 -90 -14q-44 97 -134.5 156.5t-200.5 59.5q-152 0 -260 -107.5t-108 -260.5q0 -25 2 -37q-66 -14 -108.5 -67.5t-42.5 -122.5zM300 200h200v300h200v-300h200 l-300 -300z" />
|
||||
<glyph unicode="" d="M-14 494q0 -80 56.5 -137t135.5 -57h8l414 414l403 -403q94 26 154.5 104.5t60.5 178.5q0 120 -85 206.5t-205 86.5q-46 0 -90 -14q-44 97 -134.5 156.5t-200.5 59.5q-152 0 -260 -107.5t-108 -260.5q0 -25 2 -37q-66 -14 -108.5 -67.5t-42.5 -122.5zM300 200l300 300 l300 -300h-200v-300h-200v300h-200z" />
|
||||
<glyph unicode="" d="M100 200h400v-155l-75 -45h350l-75 45v155h400l-270 300h170l-270 300h170l-300 333l-300 -333h170l-270 -300h170z" />
|
||||
<glyph unicode="" d="M121 700q0 -53 28.5 -97t75.5 -65q-4 -16 -4 -38q0 -74 52.5 -126.5t126.5 -52.5q56 0 100 30v-306l-75 -45h350l-75 45v306q46 -30 100 -30q74 0 126.5 52.5t52.5 126.5q0 24 -9 55q50 32 79.5 83t29.5 112q0 90 -61.5 155.5t-150.5 71.5q-26 89 -99.5 145.5 t-167.5 56.5q-116 0 -197.5 -81.5t-81.5 -197.5q0 -4 1 -11.5t1 -11.5q-14 2 -23 2q-74 0 -126.5 -52.5t-52.5 -126.5z" />
|
||||
</font>
</defs></svg>
Binary image files removed (before sizes): 62 KiB, 32 KiB, 283 KiB, 169 KiB, 138 KiB, 112 KiB
2276 src/output/theme/js/bootstrap.js (vendored)
7 src/output/theme/js/bootstrap.min.js (vendored)
7 src/output/theme/js/clean-blog.min.js (vendored)
9190 src/output/theme/js/jquery.js (vendored)
5 src/output/theme/js/jquery.min.js (vendored)
1 src/themes/cleanblog/.gitignore (vendored)
@ -1 +0,0 @@
example