From 5567aa625dbc87d2ee2a5823b373a5e3fe6854a7 Mon Sep 17 00:00:00 2001 From: Andrew Ridgway Date: Wed, 13 Mar 2024 09:49:19 +1000 Subject: [PATCH] cv Blog --- src/__pycache__/devpelconf.cpython-311.pyc | Bin 687 -> 696 bytes src/content/cover_letter.md | 33 ++++ src/output/archives.html | 4 +- src/output/author/andrew-ridgway.html | 15 +- src/output/authors.html | 2 +- src/output/categories.html | 1 + .../category/business-intelligence.html | 2 +- src/output/category/resume.html | 165 ++++++++++++++++ src/output/cover-letter.html | 183 ++++++++++++++++++ src/output/feeds/all-en.atom.xml | 14 +- src/output/feeds/all.atom.xml | 14 +- src/output/feeds/andrew-ridgway.atom.xml | 14 +- src/output/feeds/andrew-ridgway.rss.xml | 2 +- .../feeds/business-intelligence.atom.xml | 2 +- src/output/feeds/resume.atom.xml | 14 ++ src/output/index.html | 15 +- src/output/metabase-duckdb.html | 4 +- src/output/tag/cover-letter.html | 0 src/output/tag/resume.html | 0 src/output/tags.html | 2 + 20 files changed, 474 insertions(+), 12 deletions(-) create mode 100644 src/content/cover_letter.md create mode 100644 src/output/category/resume.html create mode 100644 src/output/cover-letter.html create mode 100644 src/output/feeds/resume.atom.xml create mode 100644 src/output/tag/cover-letter.html create mode 100644 src/output/tag/resume.html diff --git a/src/__pycache__/devpelconf.cpython-311.pyc b/src/__pycache__/devpelconf.cpython-311.pyc index 42fc1debba4b53d73bfa00cd6e0499e35e207af9..19f3dd4805b3156588612e18ae465516f24d4814 100644 GIT binary patch delta 51 zcmZ3_x`UN_IWI340}$+e{$V3G50jL!enx(7s(xZoZf0>wVsff}L26NPeqLgZ?qm(7 Fb^w+A5VrsT delta 42 wcmdnNx}KGLIWI340}$8>8*Sv~VG`8R&&bbB)lV$S%`7fSOitCGY{=9O0M4}w3jhEB diff --git a/src/content/cover_letter.md b/src/content/cover_letter.md new file mode 100644 index 0000000..aba086e --- /dev/null +++ b/src/content/cover_letter.md @@ -0,0 +1,33 @@ +Title: A Cover Letter +Date: 2024-02-23 20:00 +Modified: 2024-03-13 20:00 +Category: Resume +Tags: Cover Letter, Resume +Slug: cover-letter +Authors: Andrew Ridgway +Summary: A Summary of what I've done and Where I'd like to go for prospective Employers + +To whom it may concern + +My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career. + +I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis, and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP, including ML workloads using SageMaker. + +In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support them. This role has seen me liaise with stakeholders at all levels of the business, from analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the challenges involved in communicating with both technical and less technical employees, particularly in relation to data and ML technologies. + +I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries, including financial services, mining and retail.
I understand the complexities created by regulation in these environments, and that this can sometimes necessitate the use of technologies and designs, including legacy systems, that I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where this capability would not traditionally be used. In particular, I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control no longer relied on a daily copy and paste of folders with dates appended for major updates. My solution involved establishing guidelines for the use of git version control so that versioning happened automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to follow best practice and to ensure nothing I build is considered production ready until it is defined as IaC and deployed through a CI/CD pipeline. + +In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, such as chat and DNS. I am also in the process of designing a “set and forget” system that will allow me to run multi-user tenancies on hardware I operate, giving us the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards that allow me to monitor and control different facets of our house, like temperature and light. + +Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try to lessen the burden of that modelling work. You can see some of this work in its very early stages here: +[gpt-sql-generator](https://github.com/armistace/gpt-sql-generator) +[dbt_sources_generator](https://github.com/armistace/datahub_dbt_sources_generator) + + +I look forward to hearing from you soon. + +Sincerely, +_________________ +Andrew Ridgway + + diff --git a/src/output/archives.html b/src/output/archives.html index 67e0951..9291e33 100644 --- a/src/output/archives.html +++ b/src/output/archives.html @@ -82,7 +82,9 @@
-
Wed 18 October 2023
+
Fri 23 February 2024
+
A Cover Letter
+
Wed 15 November 2023
Metabase and DuckDB
Tue 23 May 2023
Implmenting Appflow in a Production Datalake
diff --git a/src/output/author/andrew-ridgway.html b/src/output/author/andrew-ridgway.html index aa82a0f..7da3568 100644 --- a/src/output/author/andrew-ridgway.html +++ b/src/output/author/andrew-ridgway.html @@ -81,6 +81,19 @@
+
+ +

+ A Cover Letter +

+
+

A Summary of what I've done and Where I'd like to go for prospective Employers

+ +
+

@@ -90,7 +103,7 @@

Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible


diff --git a/src/output/authors.html b/src/output/authors.html index 7861d42..634e2af 100644 --- a/src/output/authors.html +++ b/src/output/authors.html @@ -84,7 +84,7 @@

- Andrew Ridgway (3) + Andrew Ridgway (4)

diff --git a/src/output/categories.html b/src/output/categories.html index 0ea8d98..9f0f553 100644 --- a/src/output/categories.html +++ b/src/output/categories.html @@ -84,6 +84,7 @@
diff --git a/src/output/category/business-intelligence.html b/src/output/category/business-intelligence.html index 834ab3e..bb4911e 100644 --- a/src/output/category/business-intelligence.html +++ b/src/output/category/business-intelligence.html @@ -91,7 +91,7 @@

Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible


diff --git a/src/output/category/resume.html b/src/output/category/resume.html new file mode 100644 index 0000000..3fda447 --- /dev/null +++ b/src/output/category/resume.html @@ -0,0 +1,165 @@ + + + + + + + + + + + Andrew Ridgway's Blog - Articles in the Resume category + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+
+
+

Articles in the Resume category

+
+
+
+
+
+ + +
+
+
+
+ +

+ A Cover Letter +

+
+

A Summary of what I've done and Where I'd like to go for prospective Employers

+ +
+
+ + +
    + +
+ Page 1 / 1 +
+
+
+
+ +
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/src/output/cover-letter.html b/src/output/cover-letter.html new file mode 100644 index 0000000..7cfc935 --- /dev/null +++ b/src/output/cover-letter.html @@ -0,0 +1,183 @@ + + + + + + + + + + + Andrew Ridgway's Blog + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+
+
+
+

A Cover Letter

+ Posted by + Andrew Ridgway + on Fri 23 February 2024 + + +
+
+
+
+
+ + +
+
+
+ +
+

To whom it may concern

+

My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.

+

I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis, and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP, including ML workloads using SageMaker.

+

In my current role I have proposed, designed and built the data platform currently used by the business. This includes internal and external data products as well as the infrastructure and modelling to support them. This role has seen me liaise with stakeholders at all levels of the business, from analysts in the Customer Experience team right up to C-suite executives, and prepare material for board members. I understand the complexity of communicating complex system design to stakeholders at different levels, and the challenges involved in communicating with both technical and less technical employees, particularly in relation to data and ML technologies.

+

I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries, including financial services, mining and retail. I understand the complexities created by regulation in these environments, and that this can sometimes necessitate the use of technologies and designs, including legacy systems, that I wouldn’t normally use. I also have a passion for designing systems that enable these organisations to realise the benefits of CI/CD on workloads where this capability would not traditionally be used. In particular, I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control no longer relied on a daily copy and paste of folders with dates appended for major updates. My solution involved establishing guidelines for the use of git version control so that versioning happened automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to follow best practice and to ensure nothing I build is considered production ready until it is defined as IaC and deployed through a CI/CD pipeline.

+

In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster, including monitoring and deployment, that runs several services my family uses, such as chat and DNS. I am also in the process of designing a “set and forget” system that will allow me to run multi-user tenancies on hardware I operate, giving us the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards that allow me to monitor and control different facets of our house, like temperature and light.

+

Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT APIs to try to lessen the burden of that modelling work. You can see some of this work in its very early stages here: +[gpt-sql-generator](https://github.com/armistace/gpt-sql-generator) +[dbt_sources_generator](https://github.com/armistace/datahub_dbt_sources_generator)

+

I look forward to hearing from you soon.

+

Sincerely,

+
+

Andrew Ridgway

+
+ +
+ +
+
+
+ +
+ + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/src/output/feeds/all-en.atom.xml b/src/output/feeds/all-en.atom.xml index fb2318e..40e5202 100644 --- a/src/output/feeds/all-en.atom.xml +++ b/src/output/feeds/all-en.atom.xml @@ -1,5 +1,17 @@ -Andrew Ridgway's Bloghttp://localhost:8000/2023-10-18T20:00:00+10:00Metabase and DuckDB2023-10-18T20:00:00+10:002023-10-18T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-10-18:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> +Andrew Ridgway's Bloghttp://localhost:8000/2024-03-13T20:00:00+10:00A Cover Letter2024-02-23T20:00:00+10:002024-03-13T20:00:00+10:00Andrew Ridgwaytag:localhost,2024-02-23:/cover-letter.html<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p><p>To whom it may concern</p> +<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p> +<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p> +<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p> +<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. 
In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p> +<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p> +<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here: +(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator] +(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]</p> +<p>I look forward to hearing from you soon.</p> +<p>Sincerely,</p> +<hr> +<p>Andrew Ridgway</p>Metabase and DuckDB2023-11-15T20:00:00+10:002023-11-15T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-11-15:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> <p>However, for this to work we need some form of conatinerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has a admin UI accesible over a web page all whilst keeping the data locked to our network?</p> <h3>The Beginnings of an Idea</h3> <p>Ok so... Big first question. Can Duckdb and Metabase talk? Well... not quite. 
But first lets take a quick look at the architecture we'll be employing here </p> diff --git a/src/output/feeds/all.atom.xml b/src/output/feeds/all.atom.xml index 6c9a868..2b187d7 100644 --- a/src/output/feeds/all.atom.xml +++ b/src/output/feeds/all.atom.xml @@ -1,5 +1,17 @@ -Andrew Ridgway's Bloghttp://localhost:8000/2023-10-18T20:00:00+10:00Metabase and DuckDB2023-10-18T20:00:00+10:002023-10-18T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-10-18:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> +Andrew Ridgway's Bloghttp://localhost:8000/2024-03-13T20:00:00+10:00A Cover Letter2024-02-23T20:00:00+10:002024-03-13T20:00:00+10:00Andrew Ridgwaytag:localhost,2024-02-23:/cover-letter.html<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p><p>To whom it may concern</p> +<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p> +<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p> +<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p> +<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. 
In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p> +<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p> +<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here: +(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator] +(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]</p> +<p>I look forward to hearing from you soon.</p> +<p>Sincerely,</p> +<hr> +<p>Andrew Ridgway</p>Metabase and DuckDB2023-11-15T20:00:00+10:002023-11-15T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-11-15:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> <p>However, for this to work we need some form of conatinerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has a admin UI accesible over a web page all whilst keeping the data locked to our network?</p> <h3>The Beginnings of an Idea</h3> <p>Ok so... Big first question. Can Duckdb and Metabase talk? Well... not quite. 
But first lets take a quick look at the architecture we'll be employing here </p> diff --git a/src/output/feeds/andrew-ridgway.atom.xml b/src/output/feeds/andrew-ridgway.atom.xml index 1ca8a78..d66ab42 100644 --- a/src/output/feeds/andrew-ridgway.atom.xml +++ b/src/output/feeds/andrew-ridgway.atom.xml @@ -1,5 +1,17 @@ -Andrew Ridgway's Blog - Andrew Ridgwayhttp://localhost:8000/2023-10-18T20:00:00+10:00Metabase and DuckDB2023-10-18T20:00:00+10:002023-10-18T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-10-18:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> +Andrew Ridgway's Blog - Andrew Ridgwayhttp://localhost:8000/2024-03-13T20:00:00+10:00A Cover Letter2024-02-23T20:00:00+10:002024-03-13T20:00:00+10:00Andrew Ridgwaytag:localhost,2024-02-23:/cover-letter.html<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p><p>To whom it may concern</p> +<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p> +<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p> +<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p> +<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. 
In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p> +<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p> +<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here: +(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator] +(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]</p> +<p>I look forward to hearing from you soon.</p> +<p>Sincerely,</p> +<hr> +<p>Andrew Ridgway</p>Metabase and DuckDB2023-11-15T20:00:00+10:002023-11-15T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-11-15:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> <p>However, for this to work we need some form of conatinerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has a admin UI accesible over a web page all whilst keeping the data locked to our network?</p> <h3>The Beginnings of an Idea</h3> <p>Ok so... Big first question. Can Duckdb and Metabase talk? Well... not quite. 
But first lets take a quick look at the architecture we'll be employing here </p> diff --git a/src/output/feeds/andrew-ridgway.rss.xml b/src/output/feeds/andrew-ridgway.rss.xml index ffdc260..7edd2c5 100644 --- a/src/output/feeds/andrew-ridgway.rss.xml +++ b/src/output/feeds/andrew-ridgway.rss.xml @@ -1,2 +1,2 @@ -Andrew Ridgway's Blog - Andrew Ridgwayhttp://localhost:8000/Wed, 18 Oct 2023 20:00:00 +1000Metabase and DuckDBhttp://localhost:8000/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p>Andrew RidgwayWed, 18 Oct 2023 20:00:00 +1000tag:localhost,2023-10-18:/metabase-duckdb.htmlBusiness Intelligencedata engineeringMetabaseDuckDBembeddedImplmenting Appflow in a Production Datalakehttp://localhost:8000/appflow-production.html<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>Andrew RidgwayTue, 23 May 2023 20:00:00 +1000tag:localhost,2023-05-23:/appflow-production.htmlData Engineeringdata engineeringAmazonManaged ServicesDawn of another blog attempthttp://localhost:8000/how-i-built-the-damn-thing.html<p>Containers and How I take my learnings from home and apply them to work</p>Andrew RidgwayWed, 10 May 2023 20:00:00 +1000tag:localhost,2023-05-10:/how-i-built-the-damn-thing.htmlData Engineeringdata engineeringcontainers \ No newline at end of file +Andrew Ridgway's Blog - Andrew Ridgwayhttp://localhost:8000/Wed, 13 Mar 2024 20:00:00 +1000A Cover Letterhttp://localhost:8000/cover-letter.html<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p>Andrew RidgwayFri, 23 Feb 2024 20:00:00 +1000tag:localhost,2024-02-23:/cover-letter.htmlResumeCover LetterResumeMetabase and DuckDBhttp://localhost:8000/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p>Andrew RidgwayWed, 15 Nov 2023 20:00:00 +1000tag:localhost,2023-11-15:/metabase-duckdb.htmlBusiness Intelligencedata engineeringMetabaseDuckDBembeddedImplmenting Appflow in a Production Datalakehttp://localhost:8000/appflow-production.html<p>How Appflow simplified a major extract layer and when I choose Managed Services</p>Andrew RidgwayTue, 23 May 2023 20:00:00 +1000tag:localhost,2023-05-23:/appflow-production.htmlData Engineeringdata engineeringAmazonManaged ServicesDawn of another blog attempthttp://localhost:8000/how-i-built-the-damn-thing.html<p>Containers and How I take my learnings from home and apply them to work</p>Andrew RidgwayWed, 10 May 2023 20:00:00 +1000tag:localhost,2023-05-10:/how-i-built-the-damn-thing.htmlData Engineeringdata engineeringcontainers \ No newline at end of file diff --git a/src/output/feeds/business-intelligence.atom.xml b/src/output/feeds/business-intelligence.atom.xml index aa85b8e..18b2805 100644 --- a/src/output/feeds/business-intelligence.atom.xml +++ b/src/output/feeds/business-intelligence.atom.xml @@ -1,5 +1,5 @@ -Andrew Ridgway's Blog - Business Intelligencehttp://localhost:8000/2023-10-18T20:00:00+10:00Metabase and DuckDB2023-10-18T20:00:00+10:002023-10-18T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-10-18:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. 
However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> +Andrew Ridgway's Blog - Business Intelligencehttp://localhost:8000/2023-11-15T20:00:00+10:00Metabase and DuckDB2023-11-15T20:00:00+10:002023-11-15T20:00:00+10:00Andrew Ridgwaytag:localhost,2023-11-15:/metabase-duckdb.html<p>Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible</p><p>Ahhhh <a href="https://duckdb.org/">DuckDB</a> if you're even partly floating around in the data space you've probably been hearing ALOT about it and it's <em>"Datawarehouse on your laptop"</em> mantra. However, the OTHER application that sometimes gets missed is <em>"SQLite for OLAP workloads"</em> and it was this concept that once I grasped it gave me a very interesting idea.... What if we could take the very pretty Aggregate Layer of our Data(warehouse/LakeHouse/Lake) and put that data right next to presentation layer of the lake, reducing network latency and... hopefully... have presentation reports running over very large workloads in the blink of an eye. It might even be fast enough that it could be deployed and embedded </p> <p>However, for this to work we need some form of conatinerised reporting application.... lucky for us there is <a href="https://www.metabase.com/">Metabase</a> which is a fantastic little reporting application that has an open core. So this got me thinking... Can I put these two applications together and create a Reporting Layer with report embedding capabilities that is deployable in the cluster and has a admin UI accesible over a web page all whilst keeping the data locked to our network?</p> <h3>The Beginnings of an Idea</h3> <p>Ok so... Big first question. Can Duckdb and Metabase talk? Well... not quite. But first lets take a quick look at the architecture we'll be employing here </p> diff --git a/src/output/feeds/resume.atom.xml b/src/output/feeds/resume.atom.xml new file mode 100644 index 0000000..93344fc --- /dev/null +++ b/src/output/feeds/resume.atom.xml @@ -0,0 +1,14 @@ + +Andrew Ridgway's Blog - Resumehttp://localhost:8000/2024-03-13T20:00:00+10:00A Cover Letter2024-02-23T20:00:00+10:002024-03-13T20:00:00+10:00Andrew Ridgwaytag:localhost,2024-02-23:/cover-letter.html<p>A Summary of what I've done and Where I'd like to go for prospective Employers</p><p>To whom it may concern</p> +<p>My name is Andrew Ridgway and I am a Data and Technology professional looking to embark on the next step in my career.</p> +<p>I have over 10 years’ experience in System and Data Architecture, Data Modelling and Orchestration, Business and Technical Analysis and System and Development Process Design. Most of this has been in developing Cloud architectures and workloads on AWS and GCP Including ML workloads using Sagemaker. </p> +<p>In my current role I have Proposed, Designed and built the data platform currently used by business. This includes internal and external data products as well as the infrastructure and modelling to support these. 
This role has seen me liaise with stakeholders of all levels of the business from Analysts in the Customer Experience team right up to C suite executives and preparing material for board members. I understand the complexity of communicating complex system design to different level stakeholders and the complexities of involved in communicating to both technical and less technical employees particularly in relation to data and ML technologies. </p> +<p>I have also worked as a technical consultant to many businesses and have assisted with the design and implementation of systems for a wide range of industries including financial services, mining and retail. I understand the complexities created by regulation in these environments and understand that this can sometimes necessitate the use of technologies and designs, including legacy systems and designs, I wouldn’t normally use. I also have a passion of designing systems that enable these organisations to realise the benefits of CI/CD on workloads they would not traditionally use this capability. In particular I took a very traditional legacy Data Warehousing team and implemented a solution that meant version control was no longer controlled by a daily copy and paste of folders with dates on major updates. My solution involved establishing guidelines of use of git version control so that this could happen automatically as people committed new code to the core code base. As I have moved into cloud architecture I have made sure to use best practice and ensure everything I build isn’t considered production ready until it is in IAC and deployed through a CI/CD pipeline.</p> +<p>In a personal capacity I am an avid tech and ML enthusiast. I have designed my own cluster including monitoring and deployment that runs several services that my family uses including chat and DNS and am in the process of designing a “set and forget” system that will allows me to have multi user tenancies on hardware I operate that should enable us to have the niceties of cloud services like email, storage and scheduling with the safety of knowing where that data is stored and exactly how it is used. I also like to design small IoT devices out of Arduino boards allowing me to monitor and control different facets of our house like temperature and light. </p> +<p>Currently I am working on a project to merge my skill in SQL Modelling and Orchestration with GPT API’s to try and lessen that burden. You can see some of this work in its very early stages here: +(gpt-sql-generator)[https://github.com/armistace/gpt-sql-generator] +(dbt_sources_generator)[https://github.com/armistace/datahub_dbt_sources_generator]</p> +<p>I look forward to hearing from you soon.</p> +<p>Sincerely,</p> +<hr> +<p>Andrew Ridgway</p> \ No newline at end of file diff --git a/src/output/index.html b/src/output/index.html index beaf745..1a8cc4f 100644 --- a/src/output/index.html +++ b/src/output/index.html @@ -84,6 +84,19 @@
+
+ +

+ A Cover Letter +

+
+

A Summary of what I've done and Where I'd like to go for prospective Employers

+ +
+

@@ -93,7 +106,7 @@

Using Metabase and DuckDB to create an embedded Reporting Container bringing the data as close to the report as possible


diff --git a/src/output/metabase-duckdb.html b/src/output/metabase-duckdb.html index 635b725..8310b88 100644 --- a/src/output/metabase-duckdb.html +++ b/src/output/metabase-duckdb.html @@ -52,7 +52,7 @@ - + @@ -91,7 +91,7 @@

Metabase and DuckDB

Posted by Andrew Ridgway - on Wed 18 October 2023 + on Wed 15 November 2023
diff --git a/src/output/tag/cover-letter.html b/src/output/tag/cover-letter.html new file mode 100644 index 0000000..e69de29 diff --git a/src/output/tag/resume.html b/src/output/tag/resume.html new file mode 100644 index 0000000..e69de29 diff --git a/src/output/tags.html b/src/output/tags.html index 3bb320e..5d3b848 100644 --- a/src/output/tags.html +++ b/src/output/tags.html @@ -83,11 +83,13 @@

Tags for Andrew Ridgway's Blog

  • Amazon (1)
  • containers (1)
  • +
  • Cover Letter (1)
  • data engineering (3)
  • DuckDB (1)
  • embedded (1)
  • Managed Services (1)
  • Metabase (1)
  • +
  • Resume (1)