# Meta STRIKES AGAIN! New AI DEVICE, Microsofts NEW Model PHI-3, Adobe Firefly 3 STUNS! And More

## Метаданные

- **Канал:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=0jLSL0QqsjI
- **Дата:** 24.04.2024
- **Длительность:** 27:06
- **Просмотры:** 32,717
- **Источник:** https://ekstraktznaniy.ru/video/14371

## Описание

How To Not Be Replaced By AGI https://youtu.be/AiDR2aMye5M
Stay Up To Date With AI Job Market - https://www.youtube.com/@UCSPkiRjFYpz-8DY-aF_1wRg 
AI Tutorials - https://www.youtube.com/@TheAIGRIDAcademy/ 

🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Links From Todays Video:
Links From Todays Video:
https://build.nvidia.com/microsoft/phi-3-mini
https://twitter.com/DThompsonDev/status/1782446072452780347
https://twitter.com/emollick/status/1782906182312529935/photo/1
https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential/?ocid=FY24_soc_omc_br_x_Phi3
https://twitter.com/arankomatsuzaki/status/1782594659761389655
https://arxiv.org/pdf/2404.14219.pdf
https://twitter.com/ygrowthco/status/1782493076373885336
https://twitter.com/TolgaBilge_/status/1782828311766265898
https://twitter.com/tsarnick/status/1782872540463194509
https://twitter.com/tsarnick/status/1782559436055433237
https://twitter.co

## Транскрипт

### Segment 1 (00:00 - 05:00) []

so with another crazy day in artificial intelligence there were some key stories that many people did miss that you're going to want to see so without further Ado let's take a look at some of the most pressing stories so meta's smart glasses the rayb bands actually now have ai in them if you aren't familiar with what these glasses are they're basically like what the Google Lens was supposed to be and if you don't know what that is it's basically glasses that have a camera embedded in them that can take high quality pictures and you can do a whole loads of cool stuff with it now I think this was a very obvious step for meta considering the fact that they just released their new AI tool and I'm truly excited for this update because I think this actually goes to show what is going to be coming in the future with AI devices now I know a lot of people have been skeptical to try out these glasses and I have actually tried them out myself and I have to say they don't disappoint I mean when we look at generative Ai and just a personal assistant we actually wonder what the form factor is going to be and one of the key things I've noticed with the metag glasses is that they aren't actually weird they aren't actually awkward they fit pretty well and someone who's seen them on other people as well they don't look like something that's out of place and the best thing about them is that you don't actually have to get used to wearing a new and different product this is something that people already use of course some people do use sunglasses more in hot countries but you can actually get prescription lenses fitted so these can be your everyday glasses now they didn't have the AI update before today so if you do have a pair you can actually access this update but it is important to note that currently this is only in early preview so if you're like me and you do have the glasses and you're in a country like the UK they haven't actually rolled it out worldwi yet but I'm guessing that when it is rolled out completely this is going to truly change the game because like I said before the glasses were already pretty decent but this is arguably going to be one of the easiest devices is for the average person to use in terms of actually getting to grips with an AI system that's non-invasive and all that means is that this is just naturally going to be there so you'll basically just have glasses that potentially in the future will always have an AI in them and maybe a camera is just going to be that something that people get so used to that it's something that's now expected now I think this will really take off in the next 2 to 3 years when we do get AI systems that have much lower latency and the quality completely increases for example when maybe some influencers start to take video calls or they start to record certain pieces of content I think that is when the spiral up is going to truly be there because as you know social media virality trends all it takes is one big Trend and then we have thousands and thousands of people and even millions and millions of people getting in on this new AI Trend and I think the only thing that's currently stopping widespread adoption of this is of course the latency between talking to an AI system and actually getting that response and I do think like I said before within the next 3 years that's not going to be a problem at all and once that isn't a problem I'm pretty sure a lot of people are going to be utilizing this and it does show an interesting Trend because maybe this could be the future of AI form factor and other companies like open Ai and Humane might actually switch their form factor to compete with what's meta doing and I definitely do have to say that putting glasses with a camera and all sorts of technology that they do have inside is definitely a pretty hard thing to do now when actually talking about AI devices something that you should all know is that rabbit actually recently had their pickup party in New York and this is the first live unboxing of the rabbit R1 device now why am I even talking about this well number one it's quite similar to The Meta AI glasses in the sense that it's an agentic AI platform and the fact is that right here we have a Monumental moment because it's a landmark event where we actually have an AI device that many people are actually excited to use now I think this is going to be truly interesting over the next couple of days because that's when this thing is going to start getting into the hands of different Tech reviewers and I think that's when we can start to see The Wider reviews of this especially compared to human's recent aiin which was honestly just a blunder in the tech industry because I don't think they listened to initial reviews of what people were stating about the device and then of course it led up to other larger creators stating that they just think that this device has a lot of potential but at the current price point it just simply isn't worth it now the reason I'm actually talking about this device again is because the live demo actually showcases what this can really do and because it's live you can't really fake it and you can see all the flaws of this software now I'm going to play a couple of minutes of this because honestly like genuinely I was actually surprised at what this device could do because we all know that when there's a product demo from even an AI system to a simple AI pin or device or whatever it may be we're always usually skeptical because we think that they just want to sell us their product and then by the

### Segment 2 (05:00 - 10:00) [5:00]

time we order it it's not going to live up to the hype but honestly a live demo where he's showcasing all of the abilities of this system do honestly surprise me in the fact that it does live up to the hype in terms of what it's claimed to be check this out so now the vision camera or the camera works with analog scre wheels too so I can just literally do that it flip back and forth try one more time it flips okay so let's just do that what's in front of you let me see one I'm counting that's sub two seconds after let me see okay that's pretty fast or should I say this is the fastest all there guys yeah all right so I post all my demos on Discord all the time you guys know that right and love thank you it it's been like I I think must over a 100 demos I I posted during the last 1004 days but I saved one for you guys for today okay just for today this is a simple spreadsheet that my colleagues hand drawn and I come on I can do better than that you guys know that right again I want to show you something much more interesting okay so let's just point right into this so let's do something like for this spreadsheet transcribe it and swap the color and number column taking a look now you don't need to point it by the way your for the attachment wait what there's an email guys check your check your watch okay I want to make sure this is not a Spam cuz it's called rabbit we got to spam a lot um okay I'm going to put this down for a bit it looks like it says here's a revised spreadsheet you asked for and it came with a little nice attachment in a csw file let me just download it guys our one is real so I mean this isn't a paid promotion or anything I think that this actually goes ahead and shows us where we're headed in terms of agentic AI and this is just an on device AI that is simply just taking the inputs of what you're looking at and then it's doing something really quickly and honestly doing that email like within I think it was around 6 to 5 Seconds that is pretty pretty quickly like I actually can see multiple use cases for that so I got to be honest they've done something pretty cool here now what this also gets me excited for is what probably open ey is building because as you all know open ey is literally like a year two years ahead of where any other company is so it seems that maybe the industry in some areas are quite further along with some companies than others so I'm pretty excited for this and I'm honestly wondering what the reviews are going to be like in the coming days to see if they're as positive as this live one is next in AI news we actually had openi finally released some research surrounding the security and safety of AI models and this one isn't as exciting but it still is pretty interesting because we do have a scenario where large language models and these AI systems are consistently being jailbroken through prompt injections and other attacks that allow people to Simply bypass the restrictions and essentially the paper is called instruction hierarchy training llms to prioritize privileged instructions and essentially it explores the issue where as long as you have an input to the AI system it's eventually going to read that and use that now of course the problem is that the llm systems actually treat all types of input for example the system messages from developers the user messages and thirdparty content with equal priority which make them successible to malicious prompts and to counter this the auers actually propos an instruction hierarchy for llms which are essentially a framework where system messages have the highest priority followed by user messages and then the third party content so this hierarchy is designed to guide llms when conflicting instructions are present prioritizing higher level directives or disregarding or refusing to comply with lower priority potentially harmful instructions and this paper introduces a method for automated data generation to train llms on this hierarchical instruction following Behavior by simulating different types of attacks and training the models to respond appropriately ignoring lower priority malicious instructions the approach aims to significantly enhance the robustness of llms without actually sacrificing their General capabilities and evaluation results actually suggest that the models trained with this method are more robust against various types of unseen attacks indicating improved safety and reliability in real world applications but I'm going to say what everyone's thinking um open a ey can you please finally release your updated model because I'm sure the entire industry is

### Segment 3 (10:00 - 15:00) [10:00]

waiting for whatever it is that you have up your sleeves now Adobe actually finally released their Firefly 3 model and if you don't know Adobe have a model that is quite similar to Mid journey and other image generation capabilities but the problem was with that model is that it wasn't that good so many people didn't use it and considering the fact that it's baked into the Creative Suite of tools this was something that definitely needed an improvement now finally they've updated this model to version 3 which actually looks very effective and you can create higher quality images photo realistic details improved the mood improved lighting you can also expand images and people have been doing some very interesting demos with this stuff there's a Twitter thread here that explains exactly what's going on you can see a wildlife photography shop which looks stunningly realistic I would argue that I still don't believe that this is AI generated even though it completely is and then of course compared to Mid Journey these still all look real to me I know I'm living in a delusion land where I think those are still taken by a photographer but I guess my brain can't handle the fact that this is AI generated now of course you can see they also released this video where you can see all of the small updates that they've made to the platform this update I think is rather important because I use this from time to explore different creative ideas from many different purposes and I think having a model that can actually generate stuff that does really work is rather effective because I think like I've said before whilst yes generative AI is good when you can build something that's rather effective but one thing that many people are missing and you're probably missing yourself is that when they build these AI tools it's only going to be used if it's actually available to the public in a format that they are used to so for example with AI generated images now that it's an adobe Firefly we're definitely going to see rapid adoption of this because it's an inform factor that people and creatives are very used to and it's easier to use and like I said before even when they release text tools that beat things on the benchmarks a plus two or three in the MML U isn't that crazy if people can't access your model in a nice clean user interface that is very easy to use and this is why I'm saying that we're about to start to see over the next two years I think a more widespread adoption of these Technologies as they get baked into more familiar tools that many people use on a day-to-day basis and if you wanted the comparisons between Firefly V3 and mid Journey V6 you can see the comparisons on these different models and I think Firefly V3 in these prompts right here you can definitely see the improvements although yes mid Journey does take the cake in terms of photo realism I think that firefi 3 is definitely a big Improvement on what we've seen before now I'm not including this to be an AI Doomer or someone like that but I actually do believe that contrasting opinions are always important because they help the conversation progress because if everyone thinks that AI is this magical tool that has no downsides then I don't think anyone's thinking at all and I think whilst yes the majority of the points in this video I disagree with I think it's still important to have that as part of the conversation and essentially if you don't know who Gary Mark is he's a notorious I would say AI critic that tries to point out all the flaws with llms and in some cases he does keep people grounded but a lot of the time I would say that it is a bit nitpicky to just look at only the flaws of these LMS and completely disregard the breathtaking capabilities that gp4 and other state-of-the-art models have given us so just take a look at this video and then I'm just going to talk even further about this you have some people that I think are more gullible that want to believe for whatever reason that this AI is great and they notice the successes they don't notice the failures well a scientist says you need a numerator and a denominator here you need to know what percentage it's correct you also need to know when it's wrong how bad are those errors going to be and so the it is like everybody got super excited last year but we are running out of improvements at least for a little while things are slowing down and if you look at most businesses they all tried it out last year everybody subscribed to the hype but most people are like well we're doing a pilot experiment right now we're seeing if it works maybe we're running three pilot experiments but you see very few people saying yes we use this every day and it's absolutely essential for what we do the only people like that really are programmers and there's some evidence that it's making the quality of their code worse so you can write faster but sometimes they're errors and they're hard to debug and so forth so we're still I think aways from having the Practical AI that we're now dreaming of like last year was amazing it brought AI to the public people realized hey this might really help me but at the same time I think it was premature and that right now the tools that we have are limited in what they can actually do and we're kind of like fantasizing about some future world but the tools we don't have or the tools we have right

### Segment 4 (15:00 - 20:00) [15:00]

now are limited you and I think part of this is correct in the sense that like yes the tools we have right now are limited but I think that's all going to change in the next 12 months so I would say just put a pin in this idea for now for those of you who are you know wanting to contend with this opinion because as we've seen before air development is one of the fastest moving fields and if it's anything like we've seen before and as we've taken a look at many different statements you know regarding GPT 4 GPT 5 uh Frontier models and what people are working on I think it's probably going to be an even more shocking 12 months in terms of the AI acceleration because we now know that there are three Frontier Labs just all going at it in terms of trying to maintain the front spot this is what a lot of the research that's focused on synthetic data is focused on right um so if you if you don't do this well you don't get much more than you started with um but it actually is possible by injecting very small amounts of new information to get more than you started with if we go back to systems of eight years ago so if you remember alphago which is a system that was used to play Go note that the model there just trains against itself with nothing other than the rules of Go to adjudicate and those little rules of Go that little additional piece of information is enough to take the model from no ability at all to smarter than the best human at go um and so if you do it right with just a little bit of additional information you I think it may be possible to get infinite data generation engine this is what a lot of the research and I think that uh concept is particularly fascinating because being able to get an infinite data generation engine uh could definitely lead to some kind of you know self-improvement feedback loop where we get an AI that generates synthetic data we feed that data back in as long as it's verified and checked and then of course um you know there are trading runs and stuff like that do take a lot of time but you know that would just be I guess you could say exacerbated by the training run time but of course over time that time would continue to collapse as the GPU efficiency increases and as we get bigger and bigger data centers that could handle more compute so I think uh this interview is actually insightful because it does cover quite a lot of stuff he talks about the next generation of AI models and where we're really going to go and from actually paying attention to Dario amod in his interviews I think it's clear that there's still a decent amount more that we can squeeze out of these models in terms of capabilities I mean we still haven't cracked personalization long-term memory we still haven't cracked AI agents in the sense that they're able to go out and perform a task very effectively I know we recently just saw rabbit do that but I still think that there's a whole new area where once it's truly explored I think it's going to be the next level of AI that people are truly surprised by models is completely crazy mhm my good friend yanan thinks it's the right thing to do um he thinks we're all going to be fine we'll keep in control of these things they won't won have any goals of their own um they won't have any desires of their own I think it's very dangerous to open source them I think it's like open sourcing nuclear weapons so I think I would love governments to forbid companies to open source big models so the clip that you just watched there was of Jeffrey Hinton talking about Yan Lin and open source Ai and I got to say this clip and open source AI has received so much backlash now I agree with this clip for several reasons but I'm going to explain what most people just I think are missing the point here and the point here is that he's not stating that an open- Source ecosystem is bad that's not what he's stating at all and I've seen a lot of the replies to this tweet stating that you know he's a diesel he's you know awful you know he should do this or whatever and it's just like all he's saying is that when AI systems get to the point where they could literally destroy the entire planet and companies have that okay they've built a model that could you know do a catastrophic effect it could teach an average person to how to you know from scratch you know go to Walmart go to wherever build a biological agent or do something that could cause the most maximum amount of damage possible okay he's just saying that when a system gets to that capable level or when it's autonomous or where it's able to you know acquire resources in the Physical Realm he's saying when it gets to that level you just probably shouldn't open source the AI now I think that makes sense because if that is open source guys I have no idea what kind of world we're going to be living in because trust me whilst yes Society seems relatively safe now and it's been getting more safer as time goes on and I know that doesn't seem like it because the news puts on everything awful going on in the world that's how they get clicks but the point here is that we're living in the safest time in history and I think people forget that like if we have a completely open-source model that is capable of doing absolutely anything it only takes a few hundred people to completely upend society and I know you might be thinking okay we'll have AIS defending that I think at the level where AIS could completely cause a catastrophe I just think that that's

### Segment 5 (20:00 - 25:00) [20:00]

completely irresponsible now if they're going to open source something that is on the level of GPT 4 or something that's on a standard level that helps developers and helps people build whatever applications they want to build I think that's completely fine but at that level where it's at level four risk these companies are simply Never Going To open- Source them yet even release them opening I have previously stated on their blogs that they're literally never going to release a system that could help people develop novel biological agents they literally stated that in their paper and it's something that I've read so him stating this is just him stating the obvious he's just stating it out in the open so I'm not sure why there's such a backlash now there was this tweet that actually does you know dive into it and it completely dives into some of the reasons it says model developers and providers are able to find you their models and Implement security guard rails on the API to avoid producing dangerous outputs but with open source models these guard rails are simply not present or can be trivially removed and there's no known method to ensure that these models once released are not able to be further fine-tuned to produce bad outputs in other word once the model's weights are released there's no way to ensure that it cannot be used for Dangerous purposes and remember guys whilst yes you might live in a society that is completely safe and you're not going to do anything bad with it maybe you just want to use it for yourself there are tons and tons of people out there that will use much more advanced sophisticated systems than the one we have today to do unspeakable thing I mean do you guys really want another Co situation and I'm sure that many people were affected by that situation so negatively and it was something that was so hard to contain so we do have to think about the actual real world scenarios because it only takes one bad scenario for everything to be completely awful and whilst yes people are like oh we should open source it I think you don't want that because like I said before if they have like it like let's say for example gpta and let's say they open source gpta which is never going to happen okay but let's say they did okay and something awful happened if that actually happened that would be completely worse because once something awful happens then that sets the precedent for hey hey we need to be you know much more stringent on what kinds of AI systems can be built how many chips are going to be there um and I've got a video coming on how certain bills that are being proposed could potentially change the landscape so I don't think what he's suggesting is completely awful I don't want random people to be able to develop novel biological agents yes I'm trying to support a healthy developer ecosystem but I think with systems like gpt7 and GPT 8 I don't think it makes sense for those companies to open source that technology but I do think it makes sense for them to open source the research where they're talking about preventing those systems from doing any harm and I think those are two different completely distinctions it's like this don't actually help the discussion because he's stating that Regulators to ban open source AI models is just going to get him backlashed but that's not what he's stating he's just stating that advanced AI models that can do anything shouldn't be able to do that so I think it's important to always make that distinction because if there's one thing you have to remember about being on platforms like Twitter and on the Internet it's that bad news always spread faster than good news now something that was pretty crazy was Microsoft released 5 3 now if you don't know about the F series it's essentially really tiny models that Microsoft have been working on and essentially these models are very effective of what they do and they've also been focusing on really high quality synthetic data and so far we've seen that these models are surpassing things in every Benchmark at similar sizes and the reason that this is actually so surprising is because it was only a couple of days ago that we got llama 3 at 8 billion parameters and we can see that 53 Mini at 3. 8 billion parameters is actually doing better in the MML U and in the H swag and that's a model that is 3. 8 billion parameters which is pretty much half the size and you can see at the GSM AK it's doing better um and this is pretty crazy okay like this right here is I guess you could argue a GPT 3. 5 level model but it's 3. 8 billion parameters um and the 7 billion parameter model um is pretty much on par gbt 3. 5 which means that we could potentially in a couple of months you know have gbt 3. 5 or even close to gbt 4 level performance on our phones and that's going to be something that as we continue to see with the Five series be continually increased and honestly guys this is something that surprised even me because I knew 53 was going to be even better but these benchmarks are really incredible especially when it was only a couple of days ago that llama stole the show in terms of what was possible Ethan mik did actually do some demos of fi you can see that it's able to do some really interesting things he says give me a Porters fly forces analysis of a company that teleports cheese into your mouth and then you can see it says the threat of new entrance so here's Porter's five forces it's essentially something that you use to analyze a businesses viability when entering a certain Market you can see it says the threat of new entrance into the cheese teleportation industry is relatively low due to high barriers to entry the technology required for teleporting cheese is complex and expensive requiring significant ific investment into research and development as well as

### Segment 6 (25:00 - 27:00) [25:00]

patents to protect intellectual property Additionally the regulatory environment y y the point is that surprisingly having a model that's so tiny be able to give you know really concise and insightful answers is going to present a really interesting future for where we can have almost instant responses and insightful responses streamed instantly to us for a vast amount of different situations now whilst I do actually talk about the post labor economy quite a lot on the second Channel which is essentially where I talk about how the future of AI is going to upend the economy in terms of taking a large majority of the jobs um I think situations like this it does make sense number one because I think the people that do this kind of hate this although yes you're interacting with customers I've seen a million different videos on Tik Tok where people are just simply getting videos taken out of them like people are trying to do pranks for Tik Tok and stuff like that and I think it's pretty disheartening to see that because as someone just trying to do their job it must be awful to have that happen to you where you're just trying to do a job and then someone's literally just trying to make a joke out of you at work and the reason I think this is also really cool as well is because certain people don't speak all languages so this definitely is an access thing whereby different people from different countries are able to easily access this technology I mean imagine you're on holiday and you are talking to an AI system that instantly translate your language and can instantly understand your order which is truly going to break down the barriers in terms of being able to connect with other cultures and people so just take a look at this welcome to Wendy's what would you like can I have a chocolate frosty which size for the chocolate frosty medium can I get you anything else today no thank you great please pull up to the next window
