# NEW OpenAI Details Reveal NEW ORION MODEL! (Project Strawberry/Q* Star/ OpenAI ORION Model)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=foHjSo0kswU
- **Date:** 27.08.2024
- **Duration:** 32:05
- **Views:** 38,413

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00 - Intro: OpenAI's new models
02:33 - Strawberry capabilities
05:00 - NYT Connections puzzle
07:16 - Orion model
08:52 - Model distillation
10:36 - Strawberry in ChatGPT
12:27 - Synthetic data use
15:26 - Reducing hallucinations
18:58 - National security demos
20:50 - Math applications
22:26 - Competitor reasoning engines
23:38 - Sampling and searching
25:47 - AI vs human reasoning
26:50 - Sutskever's research
28:22 - Process supervision
30:57 - Conclusion and summary

Links From Today's Video:
https://www.theinformation.com/articles/openai-races-to-launch-strawberry-reasoning-ai-to-boost-chatbot-business?rc=0g0zvw
https://openai.com/index/improving-mathematical-reasoning-with-process-supervision/#samples 
https://arxiv.org/pdf/2408.03314v 
https://www.theinformation.com/articles/how-openais-smaller-rivals-are-developing-their-own-ai-that-reasons?rc=0g0zvw 
https://www.theinformation.com/articles/openai-shows-strawberry-ai-to-the-feds-and-uses-it-to-develop-orion?rc=0g0zvw 


Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=foHjSo0kswU) Intro: OpenAI's new models

So, there has been so much information released regarding OpenAI's new model, and even more regarding a super-secretive model called Orion. In this video I'm going to dive into absolutely everything you need to know, as well as some things you might have missed. This article from The Information gives us a lot of detail on the Strawberry model, including some key points you may have missed. For example, the article starts by stating that as OpenAI looks to raise more capital, it's trying to launch the next AI product it believes can reason through tough problems much better than existing AI.

Now this is where it gets interesting, because apparently we're going to be getting the launch of a new AI codenamed Strawberry, which was previously called Q* (pronounced "Q-star"). This could be a release that sits on top of ChatGPT, and the release date could be as early as the fall, meaning September, October, or November. I'm placing my bets on the latter, November, but if there are some sneaky announcements from other companies such as Google or Anthropic, I can bet you OpenAI is going to release this a little earlier.

You can see here it says that Strawberry can solve math problems it hasn't seen before, something today's chatbots cannot reliably do, and that it has been trained to solve problems involving programming, though it's not limited to answering technical questions. One of the overall things to understand about Strawberry is that this model is a reasoner: it has been trained to solve complex problems, and it's like OpenAI's reasoning engine that's going to help increase the ability of their other models.

The article also talks about how, when given additional time to "think," the Strawberry model can answer customers' questions about more subjective topics such as product marketing strategies. I think the most interesting part is that "additional time to think" is a method the article later references called test-time compute. There are many different ways you can give a model time to think, such as using different agentic workflows or different styles of prompting, but with

### [2:33](https://www.youtube.com/watch?v=foHjSo0kswU&t=153s) Strawberry capabilities

this model, which is a lot slower, I'm guessing they've got some kind of internal workings where the model thinks about the problem in an abstract way, or whatever way it uses, and it's able to generate much better answers by doing this. Now, there is also a demonstration of Strawberry's prowess with language-related tasks: OpenAI employees have shown their co-workers how Strawberry can, for example, solve New York Times Connections, a complex word puzzle. If you don't know what New York Times Connections is, it's essentially a word puzzle where you have to draw connections between words of increasing complexity. I'll show you an example from today's puzzle. What you can see here is 16 different simple words, and you have to group them into four different categories. You can pause the video now if you want to try it yourself, but I'll show you the first one: we've got "thunder," "roar," "crash," and "boom." These are all loud sounds, so we could group those into a group of four, then look at the remaining words and group those into separate groups of four.

Now, this is quite interesting, because I actually ran some small tests on current LLMs, and they don't perform that well at this task. For those of you thinking, "Hmm, this task that Strawberry is so good at — I wonder if Claude 3.5 Sonnet is any good at it," well, I asked Claude 3.5 Sonnet, currently the state-of-the-art, the smartest model you can get your hands on, and you can see right here that it can't really get the answer. What's interesting is that when I asked ChatGPT, it managed to blab on about NBA teams, then it did get the things that make a loud noise, and it said things related to fire. I'll show you the answers in a moment, but these aren't the correct ones: it managed to get the loud sounds and the hair-styling group, but it wasn't able to get the other categories and messed up some key elements. The reason I'm showing you this is that if Strawberry is able to do this well, it's clearly much smarter than where other models are currently sitting

### [5:00](https://www.youtube.com/watch?v=foHjSo0kswU&t=300s) NYT Connections puzzle

on. And you can see right here, these are the answers: none of the models managed to guess that these are kinds of chili peppers, or that these are kinds of cards. I think this is quite interesting because it shows the increasing level of difficulty and how these models reason through different problems. Right now there might be people figuring out ways to get an LLM to solve this, but I do think that if you simply give a problem like this to Strawberry and it's able to immediately solve it, that's rather fascinating, because it lets us see where these models truly are in terms of raw capability.

We also have more information regarding Strawberry and agents. The article says the effort to launch Strawberry is part of OpenAI's never-ending battle to stay ahead of other well-funded rivals vying for supremacy in conversational AI, or large language model technology, and it also has implications for future products known as agents that aim to solve multi-step tasks. One of the main reasons we don't have agents yet comes down to a single issue: reliability. We need models that don't make mistakes, because high reliability is crucial for AI agents performing long multi-step tasks; these tasks involve a series of actions that build on each other, so if the AI makes a mistake early on, it can throw off the entire process, leading to poor outcomes or even complete failure. Think of it like baking a cake: if you mix the wrong ingredients or set the oven to the wrong temperature at the beginning, it doesn't matter how well you follow the rest of the recipe — the cake won't turn out right. Similarly, if an AI makes a mistake at any point in a complex task, it can ruin everything that comes afterwards. Reliable AI ensures every step is done correctly so the whole task is completed correctly, just like carefully following each step of a recipe to get a perfect cake, and

### [7:16](https://www.youtube.com/watch?v=foHjSo0kswU&t=436s) Orion model

without the reliability that Strawberry aims to provide, the AI can't be trusted to handle important or complicated multi-step jobs, hence why we don't have reliable AI agents just yet. Now, this is where we introduce Orion. It says here that OpenAI's prospects rest in part on the eventual launch of a new flagship large language model it is currently developing, codenamed Orion. That model seeks to improve upon its existing flagship large language model, GPT-4, which launched early last year, but by now other rivals have launched LLMs that perform roughly as well as GPT-4. This is rather true: we do have models that are currently better than GPT-4 or on GPT-4's level, so interestingly enough, it's surprising that OpenAI isn't rushing to get this model out the door yet. Now, this model codenamed Orion was actually leaked sometime last year. For those of you who don't pay attention to Jimmy Apples, you may have missed his tweet from November 2023 saying "let's conquer the cosmos" along with an image. That image might seem random to anyone who views it, but it's actually the constellation of Orion, one of the most recognizable and prominent constellations in the night sky. And if you're wondering why I'm even talking about a random Twitter account tweeting a random image, this account has

### [8:52](https://www.youtube.com/watch?v=foHjSo0kswU&t=532s) Model distillation

previously tweeted many different things regarding OpenAI, months before they occurred. If you're in the AI space you'll know exactly what I'm talking about. It's just interesting that this was in the works for so long, and only now are we getting the first details of how these products are being developed. Now, you can see here it says it isn't clear whether a chatbot version of Strawberry that can boost the performance of GPT-4 and ChatGPT will be good enough to launch this year. The chatbot version would be a smaller, simplified version of the original Strawberry model, known as a distillation, and it seeks to maintain the same level of performance as the bigger model while being easier and less costly to operate.

I think what OpenAI is trying to do here is this: they have this specialized model, the Strawberry model, which they could release, and it would be absolutely incredible in terms of its reasoning capabilities, but they're stating they aren't sure whether it's going to be released — I'm not sure whether that's for safety reasons. The point is that they are trying to distill the model down so it works as well as the bigger model while being easier and less costly to operate. You have to understand that inference for these really smart models takes a long time; we saw this with Google's models, where we had to wait a few minutes to get answers from certain really advanced systems. I am wondering if they're going to release this distilled version of the Strawberry model and apply it to ChatGPT or GPT-4 to give it reasoning capabilities that surpass the other models at this moment in time. Now, what

### [10:36](https://www.youtube.com/watch?v=foHjSo0kswU&t=636s) Strawberry in ChatGPT

we also have here is the fact that this involves the process of distillation, which is becoming more and more common in the AI space. You can see here from Google's blog: it talks about how most fine-tuned large language models contain enormous numbers of parameters, and consequently foundational large language models require enormous computational and environmental resources to generate predictions; note that large swaths of those parameters are typically irrelevant for a specific application. What they're describing is that when you distill a model, you create a smaller version of a large language model, and the distilled LLM generates predictions much faster and requires fewer computational and environmental resources than the full LLM. However, the distilled model's predictions are generally not quite as good as the original LLM's, and recall that LLMs with more parameters almost always generate better predictions than LLMs with fewer parameters.

So overall, it seems OpenAI is using this process known as distillation to try to get some of the key reasoning abilities of Strawberry out to the mass market. This might be released on top of GPT-4 as a way to edge its model past the competition, but one of the things I found super interesting about this entire article is that it doesn't even mention GPT-5. All of the things we're talking about are separate from GPT-5, so it seems the models we're going to be getting are varied in capability and size: it looks like we're going to be getting GPT-5, an advanced GPT-4, and potentially Orion on top of that too
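The article doesn't say how OpenAI's distillation works internally, but the standard recipe (after Hinton et al.'s knowledge distillation) trains the small model to match the large model's softened output distribution rather than just hard labels. Here's a minimal sketch in Python; the logits are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    # Softened distribution: a higher temperature spreads probability mass,
    # exposing the teacher's "dark knowledge" about near-miss answers.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # minimizing this pushes the student toward the teacher's behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # hypothetical logits from the large model
student = [3.5, 1.2, 0.4]   # hypothetical logits from the small model
loss = distillation_loss(teacher, student)
```

Raising the temperature flattens both distributions, so the student also learns which wrong answers the teacher considers "almost right" — much of what makes distillation work in practice.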

### [12:27](https://www.youtube.com/watch?v=foHjSo0kswU&t=747s) Synthetic data use

Now, with this distillation there is some more interesting stuff. The article says that with distillation, it can be used in a chat-based product before Orion is released, and that this shouldn't come as a surprise given the intensifying competition among the top AI developers. It says: we're not sure what a Strawberry-based product might look like, but we can make an educated guess, and one obvious idea would be incorporating Strawberry's improved reasoning capability into ChatGPT; however, while those answers would be more accurate, they would also be slower. The reason is that when you have a model thinking through multiple steps, a harder question often requires it to take more time to think about the problem, as would any human dealing with a harder problem. It's going to be interesting to see how this evolves, especially considering we're getting advanced chips that allow for faster inference.

I'm wondering whether we'll get these improved reasoning capabilities in ChatGPT later this year, because there aren't any current talks of GPT-5 being released this year, and considering that's going to be a flagship product, with many delays surrounding the model and everything going on around OpenAI, I wouldn't be surprised if OpenAI just incorporates a distilled version of Strawberry into the current version of ChatGPT, just so they don't lose their position on the leaderboard. One thing you have to know about OpenAI is that they love the number-one spot on the LMSYS Arena, and that benchmark serves as an anchor for where people think the best current model is.

When we look at what Strawberry might be suited for, the article says it's not going to fit applications where users expect immediate responses, like OpenAI's SearchGPT engine, where you want near-immediate answers. What they'll really be using this model for is sensitive use cases, like fixing non-critical coding errors on GitHub. So this model is going to be quite a bit slower, but a lot more accurate, and it's going to be what people use in areas where they need accuracy over speed. That's why we have models like Google's Gemini Flash and GPT-4o mini, where you just want a response quickly and you're less concerned with whether the model is right or wrong; but with these more advanced reasoning models, what we want is a lot more accuracy, because they're dealing with more sensitive use cases, and of course things

### [15:26](https://www.youtube.com/watch?v=foHjSo0kswU&t=926s) Reducing hallucinations

like programming, where accuracy matters more than speed. Now, what's fascinating is that the article actually talks about synthetic data, and it talks about two Strawberries. Essentially, the bigger version of Strawberry — OpenAI's secret model, the bigger version of Q* — is being used to generate training data for Orion, and I think the implications of that are absolutely incredible. It says OpenAI is using the bigger version of Strawberry to generate training data for Orion, according to a person with knowledge of the situation. That kind of AI-generated data is known as synthetic data, and it means Strawberry could help OpenAI overcome limitations on obtaining enough high-quality data to train new models from real-world data such as text or images pulled from the internet.

One of the key limitations most people have talked about with these models was exactly this data question. I recently made a video — it isn't released yet — that's a deep dive into all of AI's limitations for the future, and one of those limitations was of course the data wall. But in many cases the data wall isn't really going to materialize, because, number one, we have synthetic data, and although some individuals talk about model collapse, where models trained on synthetic data tend to eat their own tail and then collapse, that isn't really an issue, especially if you filter the data with a high-quality AI model or a human; and number two, we haven't actually exhausted all the data that currently exists. The fascinating thing here is that the raw Strawberry model is apparently large enough to generate great reasoning examples to be used in training, and we know high-quality data is really important for models to succeed at reasoning tasks and produce high-quality output. So the bigger version of Strawberry is generating this training data for Orion, which brings me to a thought: how effective must this Strawberry/Q* model be at reasoning if it's able to literally generate data for their next iteration of models, codenamed Orion? That's absolutely incredible in terms of implications.

They also talk about reducing hallucinations. Using Strawberry to generate higher-quality training data could help OpenAI reduce the number of errors its models generate, otherwise known as hallucinations, said Alex Graveley, CEO of agent startup Minion AI and former chief architect of GitHub Copilot. He said: imagine a model without hallucinations, a model where you ask it a logic puzzle and it's right on the very first try; the reason the model might be able to do that is because there's less ambiguity in the training data, so it's guessing a lot less. I think that if a model can reduce hallucinations severely, it's actually going to increase the rate at which AI is adopted, because hallucinations are essentially errors where the model guesses or makes things up, and that limits where models can be used. In some industries you can't really have any errors, or the error rate needs to be so low that these models just can't be used in the workplace, because a high degree of reliability is completely necessary. Now, what's also fascinating about

### [18:58](https://www.youtube.com/watch?v=foHjSo0kswU&t=1138s) National security demos

this model is the fact that they actually showed it to national security officials. You can see right here it says that earlier this month, CEO Sam Altman tweeted an image of strawberries without explanation, fanning the flames of speculation about an upcoming release, but they also gave demonstrations of Strawberry to national security officials this summer, said a person with direct knowledge of those meetings. So we can start to gauge how good this Strawberry model is: if it can generate enough high-quality training data for models like Orion, if this is the model that led to that entire kerfuffle at OpenAI, and if it has now been shown to national security officials, then this Strawberry model is truly advanced. It genuinely makes me wonder about its capabilities, because it seems the model is so advanced that they're now taking certain precautions. There is, of course, the other argument: this might just be Sam Altman taking a different approach, considering the departures of several safety leaders earlier this year, some of whom claimed Altman didn't care as much about safeguarding the tech as they did. The article also states that by demonstrating an unreleased technology to government officials, OpenAI could be setting a new standard for AI developers, especially as advanced AI increasingly becomes a national security concern. I've always said that government intervention was going to happen, and this is also something Leopold Aschenbrenner said would eventually happen to AI labs. This is something that

### [20:50](https://www.youtube.com/watch?v=foHjSo0kswU&t=1250s) Math applications

he spoke about in his PDF document, Situational Awareness: The Decade Ahead. What was also interesting — I didn't spot this at first — is that Sam Altman apparently said "we feel like we have enough data for this next model" at an event in May, likely referring to Orion, adding that they've done all sorts of experiments, including generating synthetic data. Now let's look at why this would be a lucrative application: AI that solves tough math problems could be potentially lucrative, given that existing AI isn't really great at math-heavy fields such as aerospace and structural engineering. It's a goal that has tripped up AI researchers, who have found that large language models like ChatGPT are prone to giving wrong answers that would flunk any math student, and improvements in mathematical reasoning could help AI models reason better about conversational queries such as customer service requests. What's also interesting is that this article shows us other companies working on their own reasoning engines. We've got Google DeepMind, which is developing AI models to solve math and geometry problems; we've got Anthropic, highlighted for the reasoning capabilities and chart-and-graph interpretation of their latest model, Claude 3.5 Sonnet, which is just remarkable in terms of how it performs; you've got other companies mainly focused on developing AI agents to code and complete tasks; and of course we've got

### [22:26](https://www.youtube.com/watch?v=foHjSo0kswU&t=1346s) Competitor reasoning engines

Cognition Labs, which is developing AI agents that code, and Magic, which is doing the exact same. So there is clearly a competitive arena for these large language models, and it's going to be interesting to see which company comes out on top. Now, the article also speaks about how, to improve a model's reasoning, some startups have been using a cheap hack that involves breaking down a problem into smaller steps, though the workarounds are slow and expensive. What they're referring to is the other method people currently use: reflection. Reflection is basically the process where you have an LLM generate a response and then ask it to criticize it — basically, to give feedback to itself. For example, after a customer asks an AI app to draft the aforementioned blog post, the app could automatically trigger extra queries the customer wouldn't see, such as asking the LLM behind the app to rate how well it did and where it could improve. This is the Socratic method of teaching a student to think critically about their beliefs or arguments, and it genuinely leads to better responses, so it's
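The draft-critique-revise loop described here is easy to sketch. The prompts and the `reflect_and_revise` helper below are illustrative, not anything from the article, and `llm` is any callable mapping a prompt string to a completion string — plug in whatever chat API you actually use:

```python
# A minimal sketch of a self-reflection loop. Nothing here is a real
# library call; `llm` stands in for your model API of choice.
def reflect_and_revise(task: str, llm, rounds: int = 2) -> str:
    draft = llm(f"Draft a response to this task:\n{task}")
    for _ in range(rounds):
        # Extra queries the customer never sees: ask the model to grade
        # its own draft and list concrete improvements...
        critique = llm(
            f"Task: {task}\nDraft: {draft}\n"
            "Rate this draft and list concrete ways to improve it."
        )
        # ...then rewrite the draft, applying that self-feedback.
        draft = llm(
            f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the draft, applying the critique."
        )
    return draft
```

Each round costs two extra model calls, which is exactly the "slow and expensive workaround" trade-off the article mentions.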

### [23:38](https://www.youtube.com/watch?v=foHjSo0kswU&t=1418s) Sampling and searching

something you could always try with your own models. Now, another thing these models do for reasoning — and we don't know what method Strawberry uses, but we'll get to that in a moment — is described here: if a developer wants to take a page from Google's book, they might try a technique called sampling. Sampling plus search is really effective, and it's something that has led to superhuman AI systems. During sampling, a developer increases an LLM's ability to generate creative and varied answers by asking the same question dozens or even a hundred times and then picking the best answer. Since these models aren't completely accurate, sometimes they do get it right, and what you do is pick the reasoning steps or the answer that best fits what you want to see. For example, a coding assistant app might ask an LLM for 100 different answers to the same coding problem; then the app could run all those pieces of code, see which ones produce the correct answer, and automatically select the best one as the final answer. This is actually how AlphaCode 2 works, which performs better than an estimated 85% of competitive programmers: in its sample-generation stage it generates up to a million diverse code samples, most of which are going to be worthless, and then it selects the ones that are best.

Now, I do remember there was some discussion that, while yes, this can lead to superhuman systems, the problem is that when you look at how humans do this, humans don't search over that many possibilities to get to the answer. Look at how AlphaGo actually won: it was searching over many different moves, but Lee Sedol, a human, was, as far as we know, only considering somewhere between 50 and 100 candidate moves, whereas AlphaGo was searching through — I'm not sure if it was thousands or millions — certainly a lot more, and
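That filter-and-vote pipeline can be sketched in a few lines. Everything here is illustrative — a deterministic toy "sampler" stands in for repeated LLM calls, and a simple verifier stands in for running candidate code against unit tests:

```python
from collections import Counter
from itertools import cycle

def best_of_n(generate, check, n=100):
    # Sample n candidate answers, keep only those that pass the automatic
    # check, and return the most common survivor -- a filter-then-vote
    # scheme in the spirit of AlphaCode-style sampling (details here are
    # a sketch, not DeepMind's actual pipeline).
    survivors = [c for c in (generate() for _ in range(n)) if check(c)]
    if not survivors:
        return None  # every sample failed the check
    answer, _ = Counter(survivors).most_common(1)[0]
    return answer

# Toy demo: the "sampler" cycles through guesses; the verifier accepts
# only the value whose square is 49.
samples = cycle([3, 9, 7, 7, 2])
result = best_of_n(lambda: next(samples), lambda x: x * x == 49, n=10)
```

The quality of the whole scheme hinges on the `check` step: with a cheap, reliable verifier (like running code), sampling more buys you accuracy; without one, you're back to guessing.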

### [25:47](https://www.youtube.com/watch?v=foHjSo0kswU&t=1547s) AI vs human reasoning

there is this discussion that, while generating a million diverse code samples and then running the one that works does work, there is clearly something about the way humans reason through problems that is just far more efficient than these computers. If we can find a way to get these computers to truly reason like humans do — in the sense that they only need to search through maybe 50 candidates rather than 50,000 — then we could get models that are literally ten times smarter than us, because instead of sifting through a million different solutions and seeing which ones work, they would reason the way really smart humans do and then use those outputs. You can see right here that this article also talks about what Ilya saw: it says that Strawberry has its roots in research started years ago by Ilya Sutskever, then OpenAI's chief scientist, who recently left to start a competing AI lab, Safe Superintelligence, and

### [26:50](https://www.youtube.com/watch?v=foHjSo0kswU&t=1610s) Sutskever's research

before Ilya Sutskever left OpenAI, researchers Jakub Pachocki and Szymon Sidor built on his work by developing a new math model called Q*, and this was where we got the first inklings of something crazy going on at OpenAI. This breakthrough, and the safety conflicts around it, came just before the board — which included Ilya Sutskever — fired Sam Altman before quickly rehiring him. Last year, leading up to Q*, OpenAI researchers developed a variation of a concept known as test-time computation, where you boost an LLM's problem-solving capabilities by giving the model more time to consider all parts of a problem after you've asked it to execute, and around that time Ilya Sutskever published a blog post related to this work. Basically, test-time compute is rather effective: instead of making models bigger, you make them think harder when solving a problem. And this paper I'm going to show you from Google — this one right here — is where they tried two approaches: first they let the AI revise its answers, kind of like a student checking their work, then they used a separate AI to judge which answers looked promising and explored those more. They found that different strategies work better for different kinds of problems — easy problems need one approach and hard problems need another — and by choosing the right strategy for each problem they got better and better results while using less compute, and for easier problems

### [28:22](https://www.youtube.com/watch?v=foHjSo0kswU&t=1702s) Process supervision

this "think harder" approach sometimes worked better than just using a much larger AI. And here's the research they were talking about, published the previous year — you can see, May 31st, 2023: "Improving mathematical reasoning with process supervision." It says: we've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of the reasoning process. This is called process supervision, as opposed to rewarding only the correct final answer, which is outcome supervision. It's basically focused on the next step: as long as you get each step right, that's where you're rewarded. It says that in addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain of thought that is endorsed by humans.

So I'm wondering if this is what Q*, the Strawberry model, is, because when we talked about how Q* was able to solve math problems rather effectively, maybe this is where the research went: they built something on top of this that was a lot more effective. What we can see here is that the process-supervised approach actually performed increasingly well: as more and more samples were drawn, test performance kept going up. So it seems the later research was probably built on top of this, making the model a lot more effective. It does say, though — and I think this is probably why we haven't seen the model released yet — that it's unknown how broadly these results will generalize beyond the domain of math, and that they consider it important for future work to explore the impact of process supervision in other domains; if these results generalize, we may find that process supervision gives us the best of both worlds, a method that is more performant and more aligned than outcome supervision. This is rather fascinating, and you can see all the authors, including Jan Leike and Ilya Sutskever. I can't believe I didn't see this article before, because Q* was of course a small model that could solve math problems really well, and I'm guessing this process supervision is related to the test-time compute work and the other research behind Strawberry. Let's do a conclusion: Strawberry, formerly Q*, is a new AI model focused on improved reasoning capabilities; it can solve math problems and complex word puzzles, it might be integrated into ChatGPT later this fall, and it's currently being used to generate training data for other models. Well, it
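The contrast between the two supervision schemes boils down to where the reward signal attaches. Here's a toy sketch: scoring a solution as the product of per-step scores follows the convention of OpenAI's "Let's Verify Step by Step" work (rating a solution by the probability that every step is correct); everything else is illustrative:

```python
import math

def outcome_reward(final_answer_ok: bool) -> float:
    # Outcome supervision: one signal for the whole chain, regardless of
    # whether the intermediate reasoning was sound.
    return 1.0 if final_answer_ok else 0.0

def process_reward(step_scores) -> float:
    # Process supervision: a verifier scores every reasoning step
    # (1.0 = endorsed, 0.0 = flawed); the solution score is the product,
    # so one flawed step sinks the whole trajectory.
    return math.prod(step_scores) if step_scores else 0.0

# A chain that reaches the right answer through a flawed middle step:
# outcome supervision rewards it fully, process supervision does not.
lucky_chain = [1.0, 0.0, 1.0]
```

This is exactly why process supervision discourages "lucky guessing": a model can't be rewarded for a right answer reached through reasoning humans wouldn't endorse.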

### [30:57](https://www.youtube.com/watch?v=foHjSo0kswU&t=1857s) Conclusion and summary

might be. Now, Orion is the next major model — and the article doesn't reference GPT-5 — which aims to improve upon GPT-4; it's currently in development, and it may use data generated by Strawberry for its training. So essentially, we have a situation on our hands: OpenAI is working on multiple AI models to stay ahead of competitors. Strawberry is focused on reasoning, and Orion is meant to be the next flagship model after GPT-4, though I'm not sure whether it's the same thing as GPT-5, which could be a completely different model. We know there are currently two versions of Strawberry: the distilled version, which could be applied to ChatGPT, and the raw version, which is currently excelling at reasoning and could be used to train models like Orion. It's going to be completely fascinating to see where OpenAI lands; it seems they're just so far ahead of the competition if they've got GPT-5, Orion, distilled Strawberry, the original Strawberry, and all these other things in the works. It's going to be interesting to see what OpenAI has up their sleeve, and if you enjoyed this video, I'll see you in the next one.

---
*Source: https://ekstraktznaniy.ru/video/14103*