NEW OpenAI Update is INSANE (FREE)! 🤯
14:38


Julian Goldie SEO · 07.04.2025 · 8,400 views · 178 likes · updated 18.02.2026
Video description
🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session
Want to get more customers, make more profit & save 100s of hours with AI? Join me in the AI Profit Boardroom: https://go.juliangoldie.com/ai-profit-boardroom
🤯 Want more money, traffic and sales from SEO? Join the SEO Elite Circle 👇 https://go.juliangoldie.com/register
🤖 Need AI Automation Services? Book an AI Discovery Session Here: https://juliangoldieaiautomation.com/
Click below for FREE access to:
✅ 50 FREE AI SEO TOOLS
🔥 200+ AI SEO Prompts!
📈 FREE AI SEO COMMUNITY with 2,000 SEOs!
🚀 Free AI SEO Course
🏆 Plus TODAY's Video NOTES... https://go.juliangoldie.com/chat-gpt-prompts
- Want a Custom GPT built? Order here: https://kwnyzkju.manus.space/
- Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind
- Need consulting? Book a call with us here: https://link.juliangoldie.com/widget/bookings/seo-gameplanesov12

Unveiling Quasar Alpha: The Stealth OpenAI Model Revolutionizing AI & Coding

In this episode, we dive into the recent updates around OpenAI's Quasar Alpha, a stealth AI model with incredible potential for coding and long-context tasks. With a 1 million token context window, zero-cost API access, and remarkable speed, Quasar Alpha stands out as a game-changing tool. We discuss the key indicators suggesting Quasar Alpha's ties to OpenAI, its performance benchmarks, and its applications in Visual Studio Code and other environments. Plus, we walk through setting up APIs using Quasar Alpha, including an MCP server with Perplexity, showcasing its capabilities for coding projects. Join us to explore this groundbreaking AI model and how it can elevate your projects.
00:00 Introduction to Quasar Alpha and OpenAI Update
00:17 Evidence Supporting Quasar Alpha as an OpenAI Model
01:19 Exploring Quasar Alpha on OpenRouter
02:30 Quasar Alpha's Capabilities and Performance
05:21 User Feedback and Comparisons
06:55 Setting Up and Using Quasar Alpha for Coding
09:46 Integrating Quasar Alpha with Perplexity for Enhanced Functionality
13:19 Conclusion and Community Resources

Table of contents (8 segments)

  1. 0:00 Introduction to Quasar Alpha and OpenAI Update
  2. 0:17 Evidence Supporting Quasar Alpha as an OpenAI Model
  3. 1:19 Exploring Quasar Alpha on OpenRouter
  4. 2:30 Quasar Alpha's Capabilities and Performance
  5. 5:21 User Feedback and Comparisons
  6. 6:55 Setting Up and Using Quasar Alpha for Coding
  7. 9:46 Integrating Quasar Alpha with Perplexity for Enhanced Functionality
  8. 13:19 Conclusion and Community Resources
0:00

Introduction to Quasar Alpha and OpenAI Update

The new ChatGPT OpenAI update is absolutely insane. So today what we're going to be talking about is Quasar Alpha. And if you want to know, okay, is this definitely OpenAI or not? I would say there's a 90% probability according to ChatGPT, which actually did the research on this and said, based on the available
0:17

Evidence Supporting Quasar Alpha as an OpenAI Model

information, it is highly probable that Quasar Alpha, which is a stealth model (you can see it's a cloaked model right now, and I'll show you some other examples in a minute), is an OpenAI model. All right. So they've said several indicators support this conclusion. The upstream ID in Quasar Alpha's generation metadata starts with "chatcmpl", a prefix typically associated with OpenAI's API outputs. The tool call ID format used by Quasar Alpha aligns with OpenAI's style, differing from the formats used by other companies, including Google or Mistral. Clustering analyses of model outputs reveal that Quasar Alpha's responses are closely related to those of OpenAI's GPT-4.5 Preview, suggesting a common origin. And users have reported that when queried directly, which we'll test in a second, Quasar Alpha identifies OpenAI as its creator. All right, considering these factors, there is approximately a 90% probability that Quasar Alpha is an OpenAI model. All right, we'll test it out today
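The fingerprinting described above can be sketched in a few lines. This is a heuristic, not a definitive test: it assumes a raw chat-completion response dict, and the sample response below is made up for illustration. The "chatcmpl-" generation-ID prefix and "call_" tool-call-ID prefix are the OpenAI-style markers the analysis relies on.

```python
# Heuristic check for OpenAI-style identifiers in a chat-completion response.
# Sample data below is fabricated; only the prefix conventions matter here.

def looks_openai_style(response: dict) -> bool:
    """True if the response carries OpenAI-style generation and tool-call IDs."""
    openai_id = response.get("id", "").startswith("chatcmpl-")
    message = response.get("choices", [{}])[0].get("message", {})
    tool_calls = message.get("tool_calls", [])
    openai_tool_ids = all(tc.get("id", "").startswith("call_") for tc in tool_calls)
    return openai_id and openai_tool_ids

sample = {
    "id": "chatcmpl-abc123",  # fabricated ID with the OpenAI-style prefix
    "choices": [{"message": {"tool_calls": [{"id": "call_xyz"}]}}],
}
print(looks_openai_style(sample))  # True for this made-up sample
```

A response whose ID starts with something else (e.g. a generic "gen-" prefix) would fail the check, which is the kind of clustering signal the tweets in this video point at.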
1:19

Exploring Quasar Alpha on OpenRouter

and I'll show you how you can build things directly with it. But if we go into the chat on OpenRouter, so it's available at openrouter.ai for free, right? You can use this for free, and you can see here zero cost per input token and zero cost per output token as well. This is completely free, and you can see exactly what sort of outputs you get here. So you get a context window of 1 million tokens, which is absolutely insane. You can see how popular it is right now, for example with Roo Code and Cline. Also, if we go to the OpenRouter homepage, you can see that Quasar Alpha is popping off right now. And then additionally, if we go inside the chat here and we say, okay, what model are you, mate? We'll find out, and it says, "I'm based on the GPT architecture." I don't think it's going to tell me exactly what model it is. It's not going to give me the output, but you can see it says, "I'm based on OpenAI's GPT-4 architecture. My specific deployment is often referred to as ChatGPT, powered by GPT-4. If you're using me via OpenAI services, I'm typically the GPT-4 Turbo variant, which is optimized to be faster and more efficient whilst maintaining high-quality responses. How can I
2:30

Quasar Alpha's Capabilities and Performance

assist you further?" Now, this might mean that it's, for example, like a GPT-4.5 Turbo variant or something like that. If we ask it, okay, are you related to GPT-4.5? We'll test it out, and its data seems to be cut off at October 2023. So it doesn't seem to be able to understand itself or what it is, etc. But it is pretty awesome. This is a free API, number one. OpenAI has never released a free API before, which is very unusual. Number two, this is available via the API and the chat. And number three, it's got a million token context window, which also makes it different. And number four, it's extremely fast when you're going back and forth with it. Faster than most APIs I've ever used. And I test APIs daily. It's also, for example, better at reasoning. So you can see it pushes for deeper logic right here. It's better for tool use as well. It's absolutely popping off on OpenRouter as well. It's stealth-trained and fine-tuned. So it's not been officially announced, which is quite different as well. So it could be like a shadow test before it's publicly rolled out. And it's very similar to OpenAI's output style, but it has higher levels of creativity. All right. And this is according to ChatGPT directly. So the TL;DR here is: it's likely a stealth OpenAI model. It's built to test improvements in reasoning, tool use, speed, and cost efficiency. And it's possibly a prototype for GPT-5 or a refined GPT-4.5. Right? If you've ever used GPT-4.5, it is super slow. The other difference here, of course, is that inside OpenRouter you can click on web search here and then use the web search feature. So if we say, okay, what happened today? Let's see if this connects to web search. You can select web search here, and we'll see if it can actually go straight there. So you can see you can connect it to "what happened today" and that sort of thing, and that is based on April the 8th, 2015. Why is it giving me results from 2015? All right.
So maybe the web search doesn't work right there, but at least we tested it and found that out. OpenRouter letting me down on the web search right there. You can also change the settings here. So you can, for example, import, export, get markdown, clear the models, clear the chat. You can also, for example, compare this. So you can compare Quasar Alpha versus other models. For example, you could compare it to GPT-4o. But yeah, this is absolutely awesome. All right. The fact that you can have a higher context window as well is going to make it much better for coding, but also for using it directly inside the chat here. And you can see the speed of this, right? So if we say, okay, create a self-playing snake game directly inside here, you can see how fast it codes, right? It's really quick. It's almost at the same level as GPT-4o when it comes to outputs. Very quick at coding, right? Whereas, for example, if you're using Claude 3.7 Sonnet and stuff like that, typically it's going to take a longer time to generate
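For reference, hitting the same free API from code goes through OpenRouter's OpenAI-compatible chat-completions endpoint. A minimal sketch, with the caveat that the model slug "openrouter/quasar-alpha" reflects the stealth-period listing and the API key is a placeholder:

```python
# Build a chat-completion request to OpenRouter for the Quasar Alpha model.
# The model slug and "sk-or-..." key are placeholders; substitute your own key.
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    payload = {
        "model": "openrouter/quasar-alpha",  # free during the stealth period
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Create a self-playing snake game", "sk-or-...")
# resp = urllib.request.urlopen(req)  # uncomment with a real key to send it
```

Because the endpoint is OpenAI-compatible, any OpenAI SDK pointed at the OpenRouter base URL should work the same way.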
5:21

User Feedback and Comparisons

the responses. Now, here's what other people are saying about this. So Matthew Berman says Quasar Alpha is here and it's the AI industry's best kept secret. So you can see some examples of it here. Some people are also saying this is probably from OpenAI as well: its tool call ID format matches OpenAI's, not Google's or Mistral's. So you can see an example here. So if we look at the screenshot, great tweet here from lowam. But you can see here, for example, this is Quasar Alpha and the call ID. Whereas, for example, if you compare it to OpenAI's 4o-mini, very similar call ID, right? Whereas if you look at Google, it has a "tool ... perform research" style ID, and the same for Mistral. Totally different call ID, right? This is also the first ever stealth model released like this. So it's a pre-release of an upcoming long-context foundation model. I honestly 90%, maybe 95%, believe this is going to be an OpenAI release. All right, 1 million context length, specifically for coding, and it's available for free right here. And it's the first time that a foundation model has been released publicly like this. Some people are also saying that it's 4o fine-tuned for coding, as you can see right here, and you can see what you can build with it, right? So, for example, it seems to be really good at coding. It's beating a lot of benchmarks right here. So you can see it's in the charts, which is pretty insane when it comes to the top models. And also, if you want to use this inside other tools, you can just grab a key. So you can grab an API key
6:55

Setting Up and Using Quasar Alpha for Coding

from your settings for free inside OpenRouter. And then you go over to Roo Code or whatever tool you want to use for coding. So let's go into Visual Studio Code and then open up Roo Code. We can start using Quasar Alpha. All right. So let's go to Roo Code over here. Select our settings. We'll select OpenRouter. And then down here, select Quasar Alpha. We'll just go with the default model for now. And then we'll select OpenRouter and type in "quasar". Hit save. Hit done. And then we can build whatever we want. Right? So, for example, if we say make a quick SEO cost calculator, and then we just go back to the settings. If you select Quasar Alpha, that is free to use. So we'll hit save, hit done, make sure we've selected that, hit enter. We'll run the API request, hit approve, and then we can start coding right here. And then you can see the code is generated. So it's super fast when it generates responses. And also, the good thing about using Roo Code is that you're coding locally, right? So you can save the outputs you've got. And there you can see, job done. Boom. Let's open it up. There we go. Cost calculator ready to go. Let's calculate the cost. Works perfectly. SEO cost calculator done, right? Really powerful stuff, and it's so easy. By the way, if you want some SOPs on, like, how to set up Cline, for example, or Roo Code, we actually have SOPs inside the AI Profit Boardroom. Check the link in the comments and description. If you want to see the Cline SOP, you can go here or here. We have multiple SOPs on Cline tutorials inside there. And additionally, if you want to get set up on Roo Code, you can see how to set up Roo Code with computer use and boomerang tasks directly inside the AI Profit Boardroom. So pretty powerful stuff. Really good inside the chat directly here, and also super powerful inside Visual Studio Code directly here as well. And then you can just code out and build whatever you want for free.
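The core of the calculator built in the demo is simple enough to sketch. This is a minimal illustration of the kind of logic the model generates, not the video's actual code, and the pricing inputs are invented for the example:

```python
# Minimal SEO cost calculator, of the sort Quasar Alpha generated in Roo Code.
# All figures passed in below are illustrative, not numbers from the video.

def seo_cost(articles_per_month: int, cost_per_article: float,
             link_building_budget: float, months: int = 12) -> float:
    """Total projected SEO spend: (content + link building) per month, times months."""
    monthly = articles_per_month * cost_per_article + link_building_budget
    return monthly * months

# 8 articles/month at $150 each, plus $500/month link building, over 6 months
print(seo_cost(8, 150.0, 500.0, months=6))  # 10200.0
```

The demo version wraps this in an HTML form, but the arithmetic is the whole calculator.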
What would be interesting to see is, can we actually set up MCPs as well directly with this? So if we go to Cline, I'm just going to check my MCP tabs. We've got no MCP set up yet. So we'll type in Perplexity, and we'll try and install Perplexity using Quasar Alpha. And then we're just going to grab an API key directly from Perplexity in a second. And if we go to the MCP marketplace inside Cline, type in Perplexity, hit install. And there we go. Right, we can just install this stuff for free using this. So we're going to run that command, and then I know in a second it's going to ask me for the API key for Perplexity. So if we want to do that, just go to your settings inside Perplexity here. Then go to the API key section over here on the left-hand side. And then once you've done that, you can just create a new API key. So I'll delete this after the
9:46

Integrating Quasar Alpha with Perplexity for Enhanced Functionality

video, but we'll grab that API key right here, run this command, and we can start setting up the MCP server. Now, the reason that I would use Perplexity for an MCP, for example, is if you look at this OpenAI release, right, this Quasar Alpha, if you look at it, it's not connected to the internet and it's cut off, right? So it's actually cut off at October 2023. So if you give it the Perplexity MCP, then all of a sudden that brings it up to, like, modern speed. It can run searches from today. And I'll show you an example of that in a minute. So I'm just going to put API key equals, and we'll plug that into the API key section over here. Again, this has cost us nothing to set up, because we're using Quasar Alpha to get this set up. All right, wait for that to load. We've given it the API key over here. Now it's going to start setting that up. Now, one thing to bear in mind as well is, like, when you're setting up API keys like this with MCPs, sometimes it can come back with errors. It seems to have worked perfectly this time, which is blowing my mind, to be honest with you, but we'll see whether this works. All right. So now it's running a query using Perplexity, and that has worked beautifully. Look at that. It's worked perfectly. All right. So you can set up MCPs as well directly with Quasar Alpha. So now what we're going to say is, okay, what is Quasar Alpha? Bear in mind, on the chat inside OpenRouter it's limited. So it's not going to know what Quasar Alpha is, right? Whereas if you go inside Cline, set up the MCP server with Perplexity, and it's all of a sudden going to have a lot more knowledge, because it's connected to the internet, right? Which makes it better for building, especially if you're using other APIs to build whatever you want. So what we'll say now is: what is Quasar Alpha? Do some research with Perplexity. And we'll test the MCP server that we've just set up for free using Quasar Alpha. All right.
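Under the hood, the marketplace install boils down to an entry in Cline's MCP settings file, which follows the common MCP server-config shape (mcpServers / command / args / env). Shown here as a Python dict for illustration; the server package name and the key are placeholders, since the exact values come from the installer and your Perplexity account:

```python
# Approximate shape of the MCP settings entry the Cline marketplace writes
# when installing a Perplexity server. "perplexity-mcp" and "pplx-..." are
# placeholders, not guaranteed package/key values.
import json

mcp_settings = {
    "mcpServers": {
        "perplexity": {
            "command": "npx",
            "args": ["-y", "perplexity-mcp"],           # placeholder package name
            "env": {"PERPLEXITY_API_KEY": "pplx-..."},  # key from Perplexity settings
        }
    }
}

print(json.dumps(mcp_settings, indent=2))
```

This is also where the API key you pasted into the chat ends up, which is why the server can run live searches afterwards.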
Now, when you're using Perplexity, obviously it's going to charge you credits on Perplexity's side, which is super cheap anyway, but the API using Quasar Alpha is completely free to use. All right, so wait for that to load right here. In the meantime, if we go back to the OpenRouter chat and we say, what is Quasar Alpha? Do some research with Perplexity. It doesn't have a clue, and it's just going to hallucinate, right? So that's why you want to use the Perplexity API here, right? So, for example, we say, what is Quasar Alpha? Do some research with Perplexity. And it's, "Sure mate, here's what we found," even though it just stuck its finger in the air and totally hallucinated it. And it says Quasar Alpha appears to be an advanced large language model from Perplexity, which is clearly not right. Whereas if we go back to Cline, as you can see right here, it understands exactly what Quasar Alpha is. It says it's a stealth AI model released in April 2025 via OpenRouter, featuring a 1 million token context window, optimized for coding and long-context tasks. It generates code two to three times faster than Claude 3.5 Sonnet. It excels at building complex websites and simulations, and supports multimodal inputs. Benchmarks show it rivals top models like DeepSeek V3 and Gemini 2.5 Ultra. While its exact origin is undisclosed, evidence suggests ties to major labs like OpenAI or Google. Developers can access it via OpenRouter's API and chat interface, making it a powerful tool for rapid prototyping and educational projects. So basically, you've just created an MCP server. You've just leveraged an MCP server directly inside Cline using Quasar Alpha. It was completely free to set that up. You can code with this. You can build with it. You can use the chat directly
13:19

Conclusion and Community Resources

if you want. And it's completely free. Plus, super fast, plus a million tokens. Absolutely awesome, to be honest with you. That's pretty much it. Now, if you want to get the full recipe for this stuff, feel free to join the AI Profit Boardroom. We have a bunch of different resources right here. And also, if you ever get stuck using this sort of stuff, you can jump on the weekly Q&A. This community, the AI Profit Boardroom, is all about making more money and saving time with AI. So feel free to join, link in the comments and description. And if you ever get stuck in between the Q&A calls, you can ask directly inside the community here. Ask any questions that you have, and we'll be happy to help you and get back to you, right? So there's 694 people you can ask for help when you're struggling with AI. Also, on top of that, it comes with all my best automations for AI agents, workflows, templates, etc. All the stuff I create is released there regularly. All right. Additionally, if you want to get a free one-to-one SEO strategy session, feel free to get that, link in the comments and description. We'll show you how we take websites from zero to 145,000 a month and generate hundreds of thousands of dollars in sales in this free link-building acceleration session. You'll get a free custom-tailored game plan showing you exactly how to get more leads, traffic, and sales for free from Google to your website, to make more money based on what's working for us and our happy clients, like you can see right here. Feel free to get that, link in the comments and description. I appreciate you watching.
