From 6a50b1634da24a929567c84a5b5b7da23553a760 Mon Sep 17 00:00:00 2001 From: Blog Creator Date: Tue, 12 Aug 2025 03:51:24 +0000 Subject: [PATCH 1/4] 'GPT-OSS: EEE concerns explored deeply. ' --- src/content/gpt_oss__is_it_eee.md | 60 +++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 src/content/gpt_oss__is_it_eee.md diff --git a/src/content/gpt_oss__is_it_eee.md b/src/content/gpt_oss__is_it_eee.md new file mode 100644 index 0000000..5befca8 --- /dev/null +++ b/src/content/gpt_oss__is_it_eee.md @@ -0,0 +1,60 @@ +# GPT OSS - Is it EEE? + +## The Big Announcement: OpenAI Drops GPT-OSS + +So, OpenAI has gone and done it again. This week, they dropped a bombshell into the AI community with the release of **GPT-OSS**, a pair of open-source models that supposedly can do everything from outperforming their own proprietary models to running on a GPU that costs less than your first car. Let me break this down. + +### The Two Models: 120B and 20B + +OpenAI has released **two models**: + +* **gpt-oss-120b**: A monster with 120 billion parameters. They claim it "achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU." Sounds impressive, right? Except that GPU is a beast. You're looking at something like an A100 or H100, which are the kind of hardware that would make your average developer cry into their coffee. +* **gpt-oss-20b**: The smaller sibling, with 20 billion parameters. OpenAI says it "delivers similar results to OpenAI o3-mini on common benchmarks and can run on edge devices with just 16 GB of memory." Oh, how cute. This one could run on a Raspberry Pi if it weren't for the fact that 16 GB of memory is still more than most of us have on our laptops. + +And both models supposedly perform strongly on **tool use**, **few-shot function calling**, **CoT reasoning**, and even **HealthBench** (which is a benchmark for medical AI). They even say it outperforms **proprietary models like OpenAI o1 and GPT-4o**. That's a bold claim, but hey, if it's true, it's a game-changer. + +## Ollama Goes All-In on GPT-OSS + +Now, if you're not familiar with **Ollama**, let me give you a quick intro. It's a tool that lets you run large language models locally on your machine, and it's been a bit of a darling in the open-source AI community. Well, Ollama just released **version 0.11**, and it's basically a love letter to GPT-OSS. They've optimized the hell out of it to make sure GPT-OSS runs as smoothly as possible on your local machine. That's impressive, but it also raises a few questions. Why is Ollama so excited about this? What's in it for them? And more importantly, what's in it for us? + +## The Big Question: Is This EEE? + +Now, here's where I start to get a little uneasy. You see, **OpenAI is a Microsoft subsidiary**. They've been around for a while, and they've had their fair share of controversy. There was the whole **DeepSeek** thing, where investors got so worked up about it that it sent **Microsoft and NVIDIA stocks** into a temporary freefall. That's not something you want to mess with. + +And then there's the **Embrace, Extend, Extinguish** strategy. If you're not familiar with it, it's a classic Microsoft tactic. Here's how it works: + +1. **Embrace** a technology or standard that's popular. +2. **Extend** it with your own proprietary features. +3. **Extinguish** the original standard by making your extended version the de facto standard. + +So, is this what OpenAI is doing with GPT-OSS? 
Are they trying to **suck in the developer market** and **lock them in** with their own tools and services? Because if that's the case, it's a very clever move. + +## The Developer Market: A Battle for the Future + +Let's be honest, **OpenAI has been losing ground** in the developer market. Google's **Gemma** and Microsoft's **Phi** series have been doing a decent job of capturing the attention of developers. And then there's **Qwen**, the fully open-source model from Alibaba, which has been doing surprisingly well. Even **Claude** from Anthropic is starting to make waves. + +These models are **good**. They're **fast**. They're **efficient**. And they're **open-source**. That's a big deal. Because when you're a developer, you want to **control your own stack**. You don't want to be locked into a proprietary system that's going to cost you a fortune in API calls. + +But here's the thing: **OpenAI is still the king of the consumer market**. They've got the **brand recognition**, the **marketing budget**, and the **influencers**. They're the **McDonald's of AI**. You can't ignore them. They're everywhere. + +## The Future of AI: Not Just Chatbots + +Now, here's where I think the **real future of AI** lies. It's not just about **chatbots**. It's not just about **answering silly questions**. It's about **building tools** that can **summarize**, **write**, **structure text**, and **provide the glue** between our services. That's where the **value** is. That's where the **money** is. And that's where **OpenAI has been missing in action**. + +Sure, they've got **GPT-4**, and it's **amazing**. But when you're trying to build a **real-world application**, you don't want to be paying **$1 per token**. You want something that's **fast**, **efficient**, and **open-source**. That's why models like **Gemma**, **Phi**, and **Qwen** are starting to **take off**. They're the **tools** that developers want. They're the **ones** that can be **integrated** into **real applications**. + +## Testing GPT-OSS: A Journalist's Perspective + +Now, I'm not here to **bash** OpenAI. I'm a **journalist**, and I want to **understand** what's going on. So, I've been **testing GPT-OSS** myself. I've been running it on my local machine, and I have to say, it's **impressive**. It's **fast**, **efficient**, and **capable**. It's not perfect, but it's **getting close**. + +And that's what scares me. Because if this model is **as good as they say**, and if it's **running on a single GPU**, then we're looking at a **new era** in AI. One where **developers** can **build tools** that are **as powerful as GPT-4**, but **without the cost**. + +But here's the catch: **OpenAI is still a US company**. And they're still **controlled by Microsoft**. That means we're still looking at a **monopoly** in the AI space. A **monopoly** that's **controlled by a single company** with **deep pockets**. And that's the **real problem**. Because when you have a **monopoly**, you don't get **innovation**. You get **stagnation**. You get **high prices**. You get **lock-in**. + +## The Road Ahead: Open Source or Die + +So, where do we go from here? Well, I think it's **time for the AI community** to **embrace open source**. To **reject the monopolies**. To **build tools** that are **fast**, **efficient**, and **open**. Because if we don't, we're going to end up in a **world** where the **only possible LLM** is the one created by a **US monopoly**. And that's not a **world** I want to live in. + +So, let's **embrace open source**. 
Let's **build tools** that are **fast**, **efficient**, and **open**. Let's **reject the monopolies**. Let's **build a future** where **developers** can **innovate**, **create**, and **build** without being **locked in** to a **proprietary system**. Because that's the **future** we want. That's the **future** we deserve.
+
+And if OpenAI wants to **play in that future**, they're going to have to **embrace open source**. Because that's the **only way** to **win**. Otherwise, they're going to be left **holding the bag**, just like **Microsoft** did with **Windows 95**. And that's not a **future** I want to see.
-- 
2.39.5

From e06768e3d7aabd2018711b082a3efee1d36aa2f9 Mon Sep 17 00:00:00 2001
From: Blog Creator
Date: Tue, 12 Aug 2025 20:37:25 +0000
Subject: [PATCH 2/4] 'GPT-OSS: Examining OpenAI's new play. '
---
 src/content/gpt_oss__is_it_eee.md | 61 +++++++------------------
 1 file changed, 13 insertions(+), 48 deletions(-)

diff --git a/src/content/gpt_oss__is_it_eee.md b/src/content/gpt_oss__is_it_eee.md
index 5befca8..649f8a9 100644
--- a/src/content/gpt_oss__is_it_eee.md
+++ b/src/content/gpt_oss__is_it_eee.md
@@ -1,60 +1,25 @@
-# GPT OSS - Is it EEE?
+# GPT OSS - Is It EEE?
 
-## The Big Announcement: OpenAI Drops GPT-OSS
+## Introduction: The Return of OpenAI with GPT-OSS
 
-So, OpenAI has gone and done it again. This week, they dropped a bombshell into the AI community with the release of **GPT-OSS**, a pair of open-source models that supposedly can do everything from outperforming their own proprietary models to running on a GPU that costs less than your first car. Let me break this down.
+This week, the AI world got a bit more exciting—or maybe just a bit more chaotic—with the release of **GPT-OSS** from OpenAI. Like a long-lost friend bringing a gift, they’ve dropped two models that promise to shake things up: a 120b parameter model and a 20b model. Now, I’m not here to tell you if it’s good or bad—I’ll leave that to the benchmark tests—but what I can say is that it’s certainly got everyone talking. For those of us who’ve been tinkering with AI locally (and maybe even burning a few GPUs in the process), this feels like OpenAI finally *gets* it. They’re offering models that can run on everything from edge devices to beefy GPUs, which is kind of like finding out your favorite barista started making flat whites at home. It’s convenient, but does it mean they’re trying to corner the market?
 
-### The Two Models: 120B and 20B
+## The Rivalry: OpenAI vs. the Rest
 
-OpenAI has released **two models**:
+Now, let’s not pretend this is a surprise party. OpenAI has been the belle of the ball for years, but lately, the competition has stepped up their game. Google’s **Gemma** and Microsoft’s **Phi series** have been making waves with open-source models that rival GPT-4 in performance. Then there’s DeepSeek and Qwen, who’ve been serving up solid results but come with their own set of baggage—like trying to herd a mob of kangaroos on the highway. The developer community has been experimenting with these alternatives for months now. I mean, why wouldn’t they? Local models like Qwen are like having your own personal barista—you don’t have to wait in line, and you can tweak the blend to your liking. But OpenAI’s GPT-OSS is trying to change the game by offering something familiar yet fresh—like a new pair of thongs after a long winter.
 
-* **gpt-oss-120b**: A monster with 120 billion parameters. 
They claim it "achieves near-parity with OpenAI o4-mini on core reasoning benchmarks, while running efficiently on a single 80 GB GPU." Sounds impressive, right? Except that GPU is a beast. You're looking at something like an A100 or H100, which are the kind of hardware that would make your average developer cry into their coffee. -* **gpt-oss-20b**: The smaller sibling, with 20 billion parameters. OpenAI says it "delivers similar results to OpenAI o3-mini on common benchmarks and can run on edge devices with just 16 GB of memory." Oh, how cute. This one could run on a Raspberry Pi if it weren't for the fact that 16 GB of memory is still more than most of us have on our laptops. +## The Embrace, Extend, Extinguish Strategy: A Familiar Playbook -And both models supposedly perform strongly on **tool use**, **few-shot function calling**, **CoT reasoning**, and even **HealthBench** (which is a benchmark for medical AI). They even say it outperforms **proprietary models like OpenAI o1 and GPT-4o**. That's a bold claim, but hey, if it's true, it's a game-changer. +Now, here’s where things get interesting. OpenAI, being a Microsoft subsidiary, knows a thing or two about business strategies. Remember the old saying “embrace, extend, extinguish”? It’s like when your mate buys you a drink at the pub, then adds some extra ice cubes to make it colder—only to take over the whole bar eventually. With GPT-OSS, OpenAI is embracing the open-source movement that others have already started. They’re extending their reach by offering models that are not only competitive but also easier to deploy locally. And if they play their cards right, they might just extinguish the competition by making their version the go-to choice for developers. But here’s the kicker: OpenAI has always been a bit of a luxury brand in the AI world. Their models are great, but they come with a hefty price tag—like buying a Rolex when you could’ve gotten a decent watch for half the price. Now, with GPT-OSS, they’re lowering the barrier to entry, which is great for developers but might also be a ploy to lock them in. -## Ollama Goes All-In on GPT-OSS +## Ollama Steps In: The Enthusiast’s Best Friend -Now, if you're not familiar with **Ollama**, let me give you a quick intro. It's a tool that lets you run large language models locally on your machine, and it's been a bit of a darling in the open-source AI community. Well, Ollama just released **version 0.11**, and it's basically a love letter to GPT-OSS. They've optimized the hell out of it to make sure GPT-OSS runs as smoothly as possible on your local machine. That's impressive, but it also raises a few questions. Why is Ollama so excited about this? What's in it for them? And more importantly, what's in it for us? +If there’s one thing that solidifies OpenAI’s move, it’s **Ollama**. They’ve released version 0.11, which is practically a love letter to GPT-OSS. It’s optimized for these models, making it easier than ever to run them locally. For developers, this feels like finding out your favorite café now does delivery—except instead of coffee, they’re serving up AI models. But here’s the catch: Ollama is already a hero in the developer community. They’ve been working tirelessly to make local AI accessible to everyone, and now they’re doubling down on OpenAI’s models. It’s like having your favorite band release an album that samples their greatest hits—except instead of music, it’s AI tools. -## The Big Question: Is This EEE? 
+## The Developer Takeover: Where the Real Action Is -Now, here's where I start to get a little uneasy. You see, **OpenAI is a Microsoft subsidiary**. They've been around for a while, and they've had their fair share of controversy. There was the whole **DeepSeek** thing, where investors got so worked up about it that it sent **Microsoft and NVIDIA stocks** into a temporary freefall. That's not something you want to mess with. +Let’s get real for a second. The future of AI isn’t in chatbots that can tell you how many kangaroos are in a jar (spoiler: it’s not easy to count). It’s in developers building tools that integrate AI into everyday apps—like having your phone call a plumber for you or your fridge order groceries automatically. For years, OpenAI has been the belle of the ball, but they’ve missed out on this developer-driven innovation. Now, with GPT-OSS, they’re trying to make up for lost time. But the question is: will developers buy in? The answer seems to be a resounding “maybe.” On one hand, OpenAI’s models are powerful and familiar. On the other hand, there’s a growing contingent of developers who’ve already found success with alternatives like Claude and Qwen. These models might not have all the bells and whistles, but they’re free—and that’s hard to beat. -And then there's the **Embrace, Extend, Extinguish** strategy. If you're not familiar with it, it's a classic Microsoft tactic. Here's how it works: +## Conclusion: The Road Ahead -1. **Embrace** a technology or standard that's popular. -2. **Extend** it with your own proprietary features. -3. **Extinguish** the original standard by making your extended version the de facto standard. - -So, is this what OpenAI is doing with GPT-OSS? Are they trying to **suck in the developer market** and **lock them in** with their own tools and services? Because if that's the case, it's a very clever move. - -## The Developer Market: A Battle for the Future - -Let's be honest, **OpenAI has been losing ground** in the developer market. Google's **Gemma** and Microsoft's **Phi** series have been doing a decent job of capturing the attention of developers. And then there's **Qwen**, the fully open-source model from Alibaba, which has been doing surprisingly well. Even **Claude** from Anthropic is starting to make waves. - -These models are **good**. They're **fast**. They're **efficient**. And they're **open-source**. That's a big deal. Because when you're a developer, you want to **control your own stack**. You don't want to be locked into a proprietary system that's going to cost you a fortune in API calls. - -But here's the thing: **OpenAI is still the king of the consumer market**. They've got the **brand recognition**, the **marketing budget**, and the **influencers**. They're the **McDonald's of AI**. You can't ignore them. They're everywhere. - -## The Future of AI: Not Just Chatbots - -Now, here's where I think the **real future of AI** lies. It's not just about **chatbots**. It's not just about **answering silly questions**. It's about **building tools** that can **summarize**, **write**, **structure text**, and **provide the glue** between our services. That's where the **value** is. That's where the **money** is. And that's where **OpenAI has been missing in action**. - -Sure, they've got **GPT-4**, and it's **amazing**. But when you're trying to build a **real-world application**, you don't want to be paying **$1 per token**. You want something that's **fast**, **efficient**, and **open-source**. 
That's why models like **Gemma**, **Phi**, and **Qwen** are starting to **take off**. They're the **tools** that developers want. They're the **ones** that can be **integrated** into **real applications**. - -## Testing GPT-OSS: A Journalist's Perspective - -Now, I'm not here to **bash** OpenAI. I'm a **journalist**, and I want to **understand** what's going on. So, I've been **testing GPT-OSS** myself. I've been running it on my local machine, and I have to say, it's **impressive**. It's **fast**, **efficient**, and **capable**. It's not perfect, but it's **getting close**. - -And that's what scares me. Because if this model is **as good as they say**, and if it's **running on a single GPU**, then we're looking at a **new era** in AI. One where **developers** can **build tools** that are **as powerful as GPT-4**, but **without the cost**. - -But here's the catch: **OpenAI is still a US company**. And they're still **controlled by Microsoft**. That means we're still looking at a **monopoly** in the AI space. A **monopoly** that's **controlled by a single company** with **deep pockets**. And that's the **real problem**. Because when you have a **monopoly**, you don't get **innovation**. You get **stagnation**. You get **high prices**. You get **lock-in**. - -## The Road Ahead: Open Source or Die - -So, where do we go from here? Well, I think it's **time for the AI community** to **embrace open source**. To **reject the monopolies**. To **build tools** that are **fast**, **efficient**, and **open**. Because if we don't, we're going to end up in a **world** where the **only possible LLM** is the one created by a **US monopoly**. And that's not a **world** I want to live in. - -So, let's **embrace open source**. Let's **build tools** that are **fast**, **efficient**, and **open**. Let's **reject the monopolies**. Let's **build a future** where **developers** can **innovate**, **create**, and **build** without being **locked in** to a **proprietary system**. Because that's the **future** we want. That's the **future** we deserve. - -And if OpenAI wants to **play in that future**, they're going to have to **embrace open source**. Because that's the **only way** to **win**. Otherwise, they're going to be left **holding the bag**, just like **Microsoft** did with **Windows 95**. And that's not a **future** I want to see. +So, is GPT-OSS EEE? Maybe, maybe not. OpenAI has certainly embraced the open-source movement by releasing these models, but whether they extend or extinguish remains to be seen. For now, it’s a game of wait and see—as we developers know, sometimes the best strategy is to sit back, pour yourself a cup of tea, and watch the show unfold. As for me? I’ll keep my local AI server ready, just in case OpenAI decides to pull a fast one. But until then, I’m here, sipping my flat white, wondering if GPT-OSS will be the new black or just another flavor of the month. -- 2.39.5 From a3efb5128d4f3f2d3ba83245d6694c17759742dc Mon Sep 17 00:00:00 2001 From: Blog Creator Date: Wed, 13 Aug 2025 20:35:16 +0000 Subject: [PATCH 3/4] 'GPT OSS: Is it EEE? ' --- src/content/gpt_oss__is_it_eee.md | 64 +++++++++++++++++++++++++------ 1 file changed, 52 insertions(+), 12 deletions(-) diff --git a/src/content/gpt_oss__is_it_eee.md b/src/content/gpt_oss__is_it_eee.md index 649f8a9..e8b5d1f 100644 --- a/src/content/gpt_oss__is_it_eee.md +++ b/src/content/gpt_oss__is_it_eee.md @@ -1,25 +1,65 @@ # GPT OSS - Is It EEE? 
-## Introduction: The Return of OpenAI with GPT-OSS
+## Introduction: The Return of OpenAI (With Some Questions)
 
-This week, the AI world got a bit more exciting—or maybe just a bit more chaotic—with the release of **GPT-OSS** from OpenAI. Like a long-lost friend bringing a gift, they’ve dropped two models that promise to shake things up: a 120b parameter model and a 20b model. Now, I’m not here to tell you if it’s good or bad—I’ll leave that to the benchmark tests—but what I can say is that it’s certainly got everyone talking. For those of us who’ve been tinkering with AI locally (and maybe even burning a few GPUs in the process), this feels like OpenAI finally *gets* it. They’re offering models that can run on everything from edge devices to beefy GPUs, which is kind of like finding out your favorite barista started making flat whites at home. It’s convenient, but does it mean they’re trying to corner the market?
+This week, the AI world got a bit busier than usual. OpenAI dropped their long-awaited **GPT-OSS** models, and it feels like they’re trying to make up for lost time—or maybe just remind everyone that they’re still in the game. The release has sparked a lot of excitement, but also some confusion. Are these models really as good as they claim? And why now? Let’s break this down with all the drama, intrigue, and a dash of humor you’ve come to expect from your friendly neighborhood tech writer.
 
-## The Rivalry: OpenAI vs. the Rest
+## What Exactly Is GPT-OSS Anyway?
 
-Now, let’s not pretend this is a surprise party. OpenAI has been the belle of the ball for years, but lately, the competition has stepped up their game. Google’s **Gemma** and Microsoft’s **Phi series** have been making waves with open-source models that rival GPT-4 in performance. Then there’s DeepSeek and Qwen, who’ve been serving up solid results but come with their own set of baggage—like trying to herd a mob of kangaroos on the highway. The developer community has been experimenting with these alternatives for months now. I mean, why wouldn’t they? Local models like Qwen are like having your own personal barista—you don’t have to wait in line, and you can tweak the blend to your liking. But OpenAI’s GPT-OSS is trying to change the game by offering something familiar yet fresh—like a new pair of thongs after a long winter.
+OpenAI has thrown two models into the ring:
 
-## The Embrace, Extend, Extinguish Strategy: A Familiar Playbook
+1. **GPT-oss-120b**: A hefty 120 billion parameter model that they’re claiming can “hold its own” against their own **o4-mini** (which is *incredibly* expensive to run). The kicker? It apparently does this on a single 80GB GPU. That’s impressive if true, but let’s not get carried away just yet.
+2. **GPT-oss-20b**: The smaller sibling that’s currently helping me draft this very blog post. OpenAI says it’s on par with their **o3-mini** and can run on a measly 16GB of memory. That makes it perfect for edge devices, local inference, or when you don’t want to spend your life savings on cloud credits.
 
-Now, here’s where things get interesting. OpenAI, being a Microsoft subsidiary, knows a thing or two about business strategies. Remember the old saying “embrace, extend, extinguish”? It’s like when your mate buys you a drink at the pub, then adds some extra ice cubes to make it colder—only to take over the whole bar eventually. With GPT-OSS, OpenAI is embracing the open-source movement that others have already started. 
They’re extending their reach by offering models that are not only competitive but also easier to deploy locally. And if they play their cards right, they might just extinguish the competition by making their version the go-to choice for developers. But here’s the kicker: OpenAI has always been a bit of a luxury brand in the AI world. Their models are great, but they come with a hefty price tag—like buying a Rolex when you could’ve gotten a decent watch for half the price. Now, with GPT-OSS, they’re lowering the barrier to entry, which is great for developers but might also be a ploy to lock them in. +Both models are also supposed to be ace at tool use, few-shot function calling, CoT reasoning, and even health-related tasks—outperforming some proprietary models like GPT-4 in certain cases. Impressive? Sure. But let’s not forget that OpenAI has a history of making bold claims. -## Ollama Steps In: The Enthusiast’s Best Friend +## The Great AI Model Exodus: Why We’re Here -If there’s one thing that solidifies OpenAI’s move, it’s **Ollama**. They’ve released version 0.11, which is practically a love letter to GPT-OSS. It’s optimized for these models, making it easier than ever to run them locally. For developers, this feels like finding out your favorite café now does delivery—except instead of coffee, they’re serving up AI models. But here’s the catch: Ollama is already a hero in the developer community. They’ve been working tirelessly to make local AI accessible to everyone, and now they’re doubling down on OpenAI’s models. It’s like having your favorite band release an album that samples their greatest hits—except instead of music, it’s AI tools. +Over the past year or so, the AI community has been moving away from GPT-based models—not because they were bad (they weren’t), but because they were closed-source and expensive to use at scale. Developers wanted more control, transparency, and affordability. Enter the rise of open-source and open-weight models like: -## The Developer Takeover: Where the Real Action Is +* **Google’s Gemini (Gemma)** series +* **Microsoft’s Phi** series (yes, that Microsoft—ironically, OpenAI is a subsidiary) +* The **Qwen** series from DeepSeek +* And others like **Llama** and **Vicuna** -Let’s get real for a second. The future of AI isn’t in chatbots that can tell you how many kangaroos are in a jar (spoiler: it’s not easy to count). It’s in developers building tools that integrate AI into everyday apps—like having your phone call a plumber for you or your fridge order groceries automatically. For years, OpenAI has been the belle of the ball, but they’ve missed out on this developer-driven innovation. Now, with GPT-OSS, they’re trying to make up for lost time. But the question is: will developers buy in? The answer seems to be a resounding “maybe.” On one hand, OpenAI’s models are powerful and familiar. On the other hand, there’s a growing contingent of developers who’ve already found success with alternatives like Claude and Qwen. These models might not have all the bells and whistles, but they’re free—and that’s hard to beat. +These models have been a breath of fresh air for developers. They’re free to use, tweak, and integrate into projects without worrying about pesky API limits or astronomical costs. It’s like the AI world finally got its own version of Linux—except with neural networks. But then OpenAI showed up with GPT-OSS. And now everyone is asking: Why? -## Conclusion: The Road Ahead +## Is This an Embrace-Extend-Extinguish Play? -So, is GPT-OSS EEE? 
Maybe, maybe not. OpenAI has certainly embraced the open-source movement by releasing these models, but whether they extend or extinguish remains to be seen. For now, it’s a game of wait and see—as we developers know, sometimes the best strategy is to sit back, pour yourself a cup of tea, and watch the show unfold. As for me? I’ll keep my local AI server ready, just in case OpenAI decides to pull a fast one. But until then, I’m here, sipping my flat white, wondering if GPT-OSS will be the new black or just another flavor of the month. +Ah, the classic **Embrace, Extend, Extinguish** strategy. If you’re not familiar, it’s a business tactic where a company adopts (embrace) an existing standard or technology, extends it with their own features, and then slowly extinguishes the competition by making their version incompatible or superior. + +Now, I’m not accusing OpenAI of anything here—just pointing out that they’re a Microsoft subsidiary, and Microsoft has a history of such strategies. Whether this is intentional or just good business sense is up for debate. But let’s think about it: + +* OpenAI has dominated the consumer AI market with their **ChatGPT** and other tools. +* They’ve been losing ground in the developer market, where models like Gemini and Claude (Anthropic) are gaining traction. +* Now they’re releasing open-source models that promise to compete at GPT-4 levels. + +The timing feels a bit too convenient. OpenAI is essentially saying: “We get it. You want local, affordable, and flexible AI? We’ve got you covered.” But will this be enough to win back the developer community? Or are they just delaying the inevitable? + +## The Real Power of Local Models + +Let’s not sugarcoat it: For developers, the real value of AI isn’t in chatbots or viral social media trends. It’s in building tools that can automate, analyze, and enhance existing workflows. Think: + +* Summarizing thousands of documents in seconds. +* Automating customer support with natural language processing. +* Creating dynamic content for apps and websites on the fly. + +This is where AI shines—and where OpenAI has been dropping the ball. Their focus on consumer-facing tools like ChatGPT has made them a household name, but it’s also left developers feeling overlooked. Now, with GPT-OSS, OpenAI is trying to bridge that gap. But will they succeed? Or are they just too late to the party? + +## The Dark Side of Monocultures + +One thing I’m deeply concerned about is the potential for a monoculture in AI. If OpenAI manages to dominate the open-source space with GPT-OSS, we could end up in a world where everyone uses variations of the same model. It’s not just about vendor lock-in—it’s about stifling innovation. When every developer uses the same tools and approaches, we lose the diversity that drives progress. + +I want to see a future where there are **multiple open-source models**, each with their own strengths and weaknesses. That way, developers can choose what works best for their needs instead of being forced into one ecosystem. + +## Testing the Waters: My Journey With GPT-OSS + +As I sit here typing this blog post, I’m using a model very similar to GPT-oss-20b as my AI assistant. It’s fast, it’s local, and it’s surprisingly good at generating content. But is it better than alternatives like Claude or Gemini? That’s the million-dollar question. + +I’ve spent the past few weeks testing out various models for my own projects, and I can say this much: GPT-OSS feels like a solid contender. 
It’s fast, easy to integrate, and—dare I say it—fun to work with. But until I put it head-to-head with other models, I won’t be ready to crown it the king of AI.
+
+## Final Thoughts: The Future of AI is in Our Hands
+
+The release of GPT-OSS is a big deal—not just for OpenAI, but for the entire AI community. It’s a reminder that even closed-source giants can (and should) listen to their users. But let’s not get carried away. OpenAI isn’t the only game in town anymore. Models like Gemini, Claude, and Qwen are proving that diversity is key to innovation.
+
+As developers, we have the power to choose which models succeed—and by extension, shape the future of AI. Let’s make sure we’re making choices that benefit the community as a whole, not just a single company. After all, the last thing we need is another **AI monoculture**.
-- 
2.39.5

From f7cd3f7ba7ff6515ce13644e45a6c066f248c0d1 Mon Sep 17 00:00:00 2001
From: armistace
Date: Thu, 14 Aug 2025 11:41:46 +1000
Subject: [PATCH 4/4] Human edits to GPT-OSS Is it EEE

had to make some edits to the article and add the human intro
---
 src/content/gpt_oss__is_it_eee.md | 41 +++++++++++++++++++++++--------
 1 file changed, 31 insertions(+), 10 deletions(-)

diff --git a/src/content/gpt_oss__is_it_eee.md b/src/content/gpt_oss__is_it_eee.md
index e8b5d1f..f29941e 100644
--- a/src/content/gpt_oss__is_it_eee.md
+++ b/src/content/gpt_oss__is_it_eee.md
@@ -1,8 +1,29 @@
+Title: GPT OSS - Is It Embrace, Extend, Extinguish
+Date: 2025-08-12 20:00
+Modified: 2025-08-14 20:00
+Category: Politics, Tech, AI
+Tags: politics, tech, AI
+Slug: gpt-oss-is-it-eee
+Authors: Andrew Ridgway
+Summary: GPT OSS is here from OpenAI, the first open weight model from them since GPT-2. My question is... why now?
+
+# Human Introduction
+This has been a tough one for the publishing house to get right. I've had it generate 3 different drafts and this is still the result of quite the edit. Today's blog was written by:
+1. Gemma:27b - Editor
+2. GPT-OSS - Journalist
+3. Qwen3:14b - Journalist
+4. phi4:latest - Journalist
+5. deepseek-r1:14b - Journalist
+
+The big change from last time is the addition of gpt-oss, which is of course the focus of the topic today. It's quite the open weight model; I haven't played with the tooling yet, but I'm excited to see what it can do, even if I do have questions.
+
+Anyways, without further ado! GPT-OSS: is it EEE? Written by AI... for AI?
+
 # GPT OSS - Is It EEE?
 
 ## Introduction: The Return of OpenAI (With Some Questions)
 
-This week, the AI world got a bit busier than usual. OpenAI dropped their long-awaited **GPT-OSS** models, and it feels like they’re trying to make up for lost time—or maybe just remind everyone that they’re still in the game. The release has sparked a lot of excitement, but also some confusion. Are these models really as good as they claim? And why now? Let’s break this down with all the drama, intrigue, and a dash of humor you’ve come to expect from your friendly neighborhood tech writer.
+This week, the AI world got a bit busier than usual. OpenAI dropped their [**GPT-OSS**](https://openai.com/index/introducing-gpt-oss/) models, and it feels like they’re trying to make up for lost time—or maybe just remind everyone that they’re still in the game. The release has sparked a lot of excitement, but also some confusion. Are these models really as good as they claim? And why now? Let’s break this down with all the drama, intrigue, and a dash of humor you’ve come to expect from your friendly neighborhood tech writer. 
## What Exactly Is GPT-OSS Anyway? @@ -19,8 +40,8 @@ Over the past year or so, the AI community has been moving away from GPT-based m * **Google’s Gemini (Gemma)** series * **Microsoft’s Phi** series (yes, that Microsoft—ironically, OpenAI is a subsidiary) -* The **Qwen** series from DeepSeek -* And others like **Llama** and **Vicuna** +* The **Qwen** series +* And others like **Llama** and **Deepseek** These models have been a breath of fresh air for developers. They’re free to use, tweak, and integrate into projects without worrying about pesky API limits or astronomical costs. It’s like the AI world finally got its own version of Linux—except with neural networks. But then OpenAI showed up with GPT-OSS. And now everyone is asking: Why? @@ -31,8 +52,8 @@ Ah, the classic **Embrace, Extend, Extinguish** strategy. If you’re not famili Now, I’m not accusing OpenAI of anything here—just pointing out that they’re a Microsoft subsidiary, and Microsoft has a history of such strategies. Whether this is intentional or just good business sense is up for debate. But let’s think about it: * OpenAI has dominated the consumer AI market with their **ChatGPT** and other tools. -* They’ve been losing ground in the developer market, where models like Gemini and Claude (Anthropic) are gaining traction. -* Now they’re releasing open-source models that promise to compete at GPT-4 levels. +* They’ve been losing ground in the developer market, where models like [Gemini](https://deepmind.google/models/gemini/pro/) and particularly [Claude (Anthropic)](https://claude.ai/) are gaining traction in the proprietary space. +* Now they’re releasing open-source models that promise to compete at GPT-4 levels to try and bring in the Deepseek and Qwen crowd. The timing feels a bit too convenient. OpenAI is essentially saying: “We get it. You want local, affordable, and flexible AI? We’ve got you covered.” But will this be enough to win back the developer community? Or are they just delaying the inevitable? @@ -44,22 +65,22 @@ Let’s not sugarcoat it: For developers, the real value of AI isn’t in chatbo * Automating customer support with natural language processing. * Creating dynamic content for apps and websites on the fly. -This is where AI shines—and where OpenAI has been dropping the ball. Their focus on consumer-facing tools like ChatGPT has made them a household name, but it’s also left developers feeling overlooked. Now, with GPT-OSS, OpenAI is trying to bridge that gap. But will they succeed? Or are they just too late to the party? +This is where AI shines—and where OpenAI has been losing market and mind share. Their focus on consumer-facing tools like ChatGPT has made them a household name, but it’s also left developers feeling overlooked. Now, with GPT-OSS, OpenAI is trying to bridge that gap. But will they succeed? Or are they just too late to the party? ## The Dark Side of Monocultures One thing I’m deeply concerned about is the potential for a monoculture in AI. If OpenAI manages to dominate the open-source space with GPT-OSS, we could end up in a world where everyone uses variations of the same model. It’s not just about vendor lock-in—it’s about stifling innovation. When every developer uses the same tools and approaches, we lose the diversity that drives progress. -I want to see a future where there are **multiple open-source models**, each with their own strengths and weaknesses. That way, developers can choose what works best for their needs instead of being forced into one ecosystem. 
+I want to see a future where there are **multiple open-source or at the very least open weight models**, each with their own strengths and weaknesses. That way, developers can choose what works best for their needs instead of being forced into one ecosystem.
 
 ## Testing the Waters: My Journey With GPT-OSS
 
-As I sit here typing this blog post, I’m using a model very similar to GPT-oss-20b as my AI assistant. It’s fast, it’s local, and it’s surprisingly good at generating content. But is it better than alternatives like Claude or Gemini? That’s the million-dollar question.
+This blog post was partly written by GPT-oss-20b. It’s fast, it’s local, and it’s surprisingly good at generating content. But is it better than open weight alternatives like Deepseek or Gemma (the open weight Gemini)? That’s the million-dollar question.
 
-I’ve spent the past few weeks testing out various models for my own projects, and I can say this much: GPT-OSS feels like a solid contender. It’s fast, easy to integrate, and—dare I say it—fun to work with. But until I put it head-to-head with other models, I won’t be ready to crown it the king of AI.
+I’ve been testing out various models for my own projects, and I can say this much: GPT-OSS feels like a solid contender. It’s fast, easy to integrate, and—dare I say it—fun to work with. But until I put it head-to-head with other models, I won’t be ready to crown it the king of AI.
 
 ## Final Thoughts: The Future of AI is in Our Hands
 
-The release of GPT-OSS is a big deal—not just for OpenAI, but for the entire AI community. It’s a reminder that even closed-source giants can (and should) listen to their users. But let’s not get carried away. OpenAI isn’t the only game in town anymore. Models like Gemini, Claude, and Qwen are proving that diversity is key to innovation.
+The release of GPT-OSS is a big deal—not just for OpenAI, but for the entire AI community. It’s a reminder that even closed-source giants can (and should) listen to their users. But let’s not get carried away. OpenAI isn’t the only game in town anymore. Models like Gemini and Claude in the proprietary space, and Qwen and Llama in the open source space, are proving that diversity is key to innovation.
 
 As developers, we have the power to choose which models succeed—and by extension, shape the future of AI. Let’s make sure we’re making choices that benefit the community as a whole, not just a single company. After all, the last thing we need is another **AI monoculture**.
-- 
2.39.5