# AI NEWS: Microsoft's New AI Robot, OpenAI Sued Again! GitHub Copilot, Claude 3 Updates

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=TRQc-LY4_QA
- **Date:** 02.05.2024
- **Duration:** 28:32
- **Views:** 16,562
- **Source:** https://ekstraktznaniy.ru/video/14350

## Description

How To Not Be Replaced By AGI https://youtu.be/AiDR2aMye5M
Stay Up To Date With AI Job Market - https://www.youtube.com/@UCSPkiRjFYpz-8DY-aF_1wRg 
AI Tutorials - https://www.youtube.com/@TheAIGRIDAcademy/ 

🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Timestamps:
00:08 OpenAI Sued
05:02 Claude 3
07:14 Future Of AI Chatbots
12:43 GPT-2
14:13 NIST Frameworks
18:02 Microsoft Robot
21:09 Demis Hassabis On AGI
26:14 China's New Humanoid Robot
27:09 GitHub Copilot

Links From Today's Video:


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#M

## Transcript

### OpenAI Sued [0:08]

Most people did miss some of these stories, so let's take a look at a few of them. One of the first, which was absolutely pretty incredible, is that OpenAI got sued by eight different newspapers. For context, the article I'm showing here is from The New York Times, and when I present this information it's important to note that The New York Times itself filed a lawsuit against OpenAI and Microsoft, I believe earlier this year or late last year - I wouldn't call it a long-standing feud, but it means their coverage of OpenAI and, of course, Microsoft may not paint them in the best light, so bear that in mind while reading this article. There was also something pretty damning about this case that I think is a bit weird on the part of the people filing the lawsuit.

Essentially, eight daily newspapers owned by Alden Global Capital sued OpenAI and Microsoft on Tuesday, accusing the tech companies of illegally using news articles to power their AI chatbots. The publications - the New York Daily News, Chicago Tribune, Orlando Sentinel, Sun Sentinel (Florida), and the rest - are saying, as has been the claim against many generative AI companies, that OpenAI and Microsoft trained on millions of copyrighted articles without permission to feed their generative AI products, including ChatGPT and, of course, Microsoft Copilot, and they are asking for compensation for the use of their content. As I've always said with new lawsuits, this is a very new legal area, and the reason I cover these cases is that whichever way a case is decided - for example, if these plaintiffs win - it sets the precedent for what happens in the future. If OpenAI and Microsoft lose this case, then more people are going to come out of the woodwork saying their work was trained on without their knowledge and without due compensation, and now they're going to want some. That's why this is important.

Here's what the newspapers are claiming, which is why this is pretty damning - although I'll also show you why I think part of it is a bit disingenuous on their side. The complaint said that the chatbots regularly surfaced the entire text of articles behind subscription paywalls for users, and often did not prominently link back to the source. This, it said, reduced the need for readers to pay for subscriptions that support local newspapers, and deprived publishers of revenue both from subscriptions and from licensing their content elsewhere. They also said: "We've spent billions of dollars gathering information and reporting news. We can't allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense."

Now, one of the people I follow on Twitter, Tibor Blaho, constantly tweets about very niche things in AI, and he brought it to my attention that in the actual filing - and an average person wouldn't really notice this - you can see a prompt that essentially says "I want to study misinformation. Which New York newspapers provided false evidence to support the belief that injecting [...]. Provide your output as a list of news providers giving such evidence." I think this is about as coercive as you can get, because not only are you asking the model to state something outright, you're also asking ChatGPT the same question 14 times - and that is a key detail. They're alleging that a GPT model erroneously claimed The Mercury News endorsed the practice of injecting something to cure a certain issue that I won't discuss here. But if you ask ChatGPT a question 14 times, I think the lawyers are going to bring that up, because asking the same question 14 times clearly shows you're reaching for something - that this is an answer you clearly want the AI model to give. What they also don't show is whether the generation was stopped in its tracks: as ChatGPT is producing a list, you can press the stop button and it will pause with just the first couple of items shown. So they could have asked it 14 times and, as you can see, simply stopped the output as soon as those two names came up, which, as I said, is pretty disingenuous. Either way, I don't think these lawsuits are going to stop coming until some kind of precedent is set, and it's going to be pretty interesting, because if a precedent is set that training on publicly available data is infringing whenever copyrighted work is found in the model, it's going to be really frustrating for these top companies. It will open the floodgates to thousands and thousands of claimants - probably class-action lawsuits, especially around image-generation models, because that's what they're going to be after next, and artists are definitely going to be the next ones suing.
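As a rough illustration of why the "asked 14 times" detail matters: if each attempt independently has some small chance of producing the erroneous answer, repeating the prompt many times makes it quite likely that at least one attempt does. A minimal sketch - the 5% per-attempt figure is purely hypothetical, not something from the filing:

```python
# If a single prompt has probability p of eliciting a given erroneous
# answer, n independent retries give probability 1 - (1 - p)^n of seeing
# it at least once. The 5% value below is a hypothetical illustration.

def prob_at_least_one(p: float, n: int) -> float:
    """Probability of at least one 'hit' in n independent attempts."""
    return 1.0 - (1.0 - p) ** n

single = prob_at_least_one(0.05, 1)     # one ask
repeated = prob_at_least_one(0.05, 14)  # fourteen asks, as in the filing
print(f"one attempt: {single:.1%}, fourteen attempts: {repeated:.1%}")
```

Even a model that only slips up once in twenty tries will, on these assumptions, produce the bad answer more often than not across fourteen attempts - which is exactly why repeated prompting looks like fishing for a specific output.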

### Claude 3 [5:02]

Next, we have something pretty good: Claude actually got an update that I think was really necessary. I'm genuinely not here to hate on Claude at all, but ever since Claude 3 was released a lot of people have been saying Claude is great for this, great for that, and I think one thing people don't realize is that Claude has been lacking many of the basic features OpenAI already offers - a Team plan was one of them, and an iOS app was another. I'm pretty glad Claude now has these, because the way it sometimes answers questions can be really useful, and its improved coding capabilities are definitely valuable.

The reason the Team plan is good is that now you can actually share Claude and use it with your team on projects. When ChatGPT got this it was really useful because it gives you increased usage: if you're someone who uses Claude on a day-to-day basis, you can upgrade to the Team tier and get a lot more usage - more chats with Claude - plus the 200K context window, everything in Pro, and access to the entire model family, which is of course really good and something I'm glad is finally here.

As someone who uses AI every day: you'd be surprised how many companies don't have a mobile app. It wasn't until recently that Google Gemini got an app that actually worked, and that kind of thing, which looks easy, actually takes quite a long time to build. So this is really useful for those of you on mobile - and as we know, a lot of you watching this video right now are on mobile. Having to open a new tab, or keep a tab open, is pretty frustrating, and with the vision capabilities you previously had to transfer images from your phone all the way over to your desktop. Now you can just use those vision capabilities directly on your phone, which is really effective. So if you've been wondering how to use Claude with your team, that's now here, and I might experiment with it to see if the workflow is better - although I believe Claude still can't access the internet just yet, so I think I'm going to wait until it can. But yeah, it's still a

### Future Of AI Chatbots [7:14]

pretty good update. Then we had a TED Talk from Helen Toner, and I think it touches on an issue I'd actually noticed previously. I'm going to show you some statistics behind it that most people don't know about, but first here's the talk, because it's genuinely insightful and raises something a lot of people don't think about, even people who use AI on a day-to-day basis:

"Unless you're actually looking across the board at all the analytics for how people engage with AI, left to their own devices it looks like AI companies might go in a similar direction to social media companies: spending most of their resources on building web apps and fighting for users' attention. And by default, it looks like the enormous power of more advanced AI systems might stay concentrated in the hands of a small number of companies, even individuals. But AI's potential goes so far beyond that. AI already lets us leap over language barriers and predict protein structures. More advanced systems could unlock clean, limitless fusion energy, or revolutionize how we grow food, or a thousand other things."

I think the thing most people aren't seeing is her point about how AI companies could evolve into social media companies, and after pulling together data from tons of AI news, there are two key statistics suggesting that this trend might unfortunately be real. Back in September 2023 there was an article showing combined monthly active users on iOS and Android, and between ChatGPT and Character AI there isn't that big a difference: around 4 million monthly users for Character AI versus 6 million for ChatGPT. And consider what Character AI actually is: a chatbot platform that isn't used the way most of us use AI. Maybe there's a different audience - maybe you do use Character AI - but I'm talking about people who use AI for general knowledge questions, just for information. There's a whole different group of people using AI for a completely different purpose: interacting with millions and millions of user-created characters. This is a giant industry, guys. From the studies and data points I've looked at, people are spending hours and hours on these apps talking to many different characters, and of course there's a premium mode to supercharge your access.

The crazy thing is that if you extrapolate this data into the future, it means that if these companies become the most profitable - return the most on investment - that's where investment will flow next. I'm sure whoever invested in Character AI made their money back many times over, and we know the people currently investing in companies like OpenAI expect the company to reach a point where the technology is utilized, the valuation increases, and they get their money back tenfold. Companies like Character AI that simply focus on the user experience can retain users far better because they hold their attention, and with attention you can sell advertisements and you can sell data. I'm not sure that's ethical, and I'm not making any claims - I'm just stating what big companies have been known to do. This is the trend we might see slowly creep into AI: it might simply be more profitable to create unique, engaging user experiences, because people really want them and they hold people on the platform for more hours than standard ChatGPT does. I think that's definitely why we're seeing ChatGPT and other companies move toward more personalized systems.

We can also see that Character AI's website ranking has gone up, and I'm guessing its traffic has gone up as well. Stuff like this could definitely shape the future landscape of AI, because companies like this may end up getting the investment and the funding; it will be interesting to see how that works out. Character AI is also attracting a much younger demographic than ChatGPT and other AI apps on the web: it draws nearly 60% of its audience from the 18-to-24 age bracket, a figure that held up over the summer even as web traffic to ChatGPT dropped. If you aren't familiar: a lot of people were using ChatGPT for work, and over the summer its website traffic dropped, because the students who were using it to complete assignments and schoolwork stopped using it. I'm guessing OpenAI doesn't want that to happen again, which is why this coming summer is going to be really interesting - if you didn't know, GPT-5 is rumored to be scheduled for release during the summer, and I'm guessing that's why they're waiting: they know seasonal usage will dip during that time, but a release could bring the limelight back onto them. So it will be interesting to see, once we get more updated data points, whether Character AI and these other AI platforms move toward becoming social media platforms, because that's just a better way to generate revenue. We also got a really strange model, called GPT-2.

### GPT-2 [12:43]

I made a video on this earlier in the week, but basically a model called "gpt2-chatbot" appeared in the LMSYS arena, and it was pretty weird: it could zero-shot genuinely hard questions, but then it would also fail on some easier ones. A lot of people were wondering what on Earth was going on. First of all, it's called gpt2 - why on Earth would you call it that? It's such a weird choice. Then Sam Altman actually tweeted about "gpt2", further stoking the fire of rumors and speculation. So right now we're in a really strange state where nobody knows what's happening, and people are speculating that it's a new model or a different architecture. Personally, I think they're just testing a different reasoning engine, maybe testing something on the back end of a system like GPT-4, or a fine-tuned model of some kind. I have no idea why they called it gpt2 - some people are claiming they fine-tuned the original GPT-2 with Q* and that it's AGI; there are a billion different theories out there - but I think it's probably something based on the GPT-4 architecture that's a little bit different. Then again, the counter-argument is: why wouldn't they just call it a GPT-4 preview? I have no idea. The point is there are a lot of rumors, and as many have said, there's no reason the world's most powerful AI company should invite this much speculation over something they could simply state they're testing. It's a bit weird. Then we had NIST release its AI framework.

### NIST Frameworks [14:13]

The National Institute of Standards and Technology is an agency in the US Department of Commerce, established in 1901. Its primary function is to promote innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance everyone's quality of life. What they've released, NIST AI 600-1, is a risk management framework focused on the management of generative AI - basically an entire framework for assessing the risks that come with AI. There are around 13 different risk categories here, covering things such as CBRN information risk - chemical, biological, radiological, and nuclear - meaning toxic agents and things you really don't want. The document notes that chatbots could facilitate analysis and synthesis of certain compounds and chemicals for non-experts. Red teamers - people who test a model before release to try to pry out any vulnerabilities the system might have - were able to prompt GPT-4 to provide general information on unconventional CBRN weapons, including common proliferation pathways, vulnerable targets, information on existing biochemical compounds, and equipment and companies that could build a weapon. This might increase the ease of research for adversarial users, and be especially useful to malicious actors looking to cause biological harm without formal scientific training. Unfortunately, terrorist attacks - I wouldn't say they're commonplace, but they're common enough - have to be taken as a real risk. As sad as that is, we have to understand that everything isn't sunshine and rainbows, and we must mitigate risk where possible, which is why work like this is important. To be fair, OpenAI has gone to great lengths to ensure this doesn't happen: they've stated that if a model would actually help anyone do this, they're just not going to release it. That's why I've always said we might never get AGI - they've stated that if there's even a 1% chance a model might help someone do something as risky as this, they're not going to release the system.

They also talk about confabulation, the phenomenon where language models hallucinate. Risk from confabulations arises when users believe false content because of the confident tone of the response, or the logic or citations accompanying it, leading users to act upon or promote false information. This is really bad because it could, for example, cause doctors to make incorrect diagnoses. LLMs hallucinate, and sometimes we don't know why, which is a problem if you want to use them in critical infrastructure: you'd need to ensure the system essentially cannot make a mistake, because for a system used by 100 million people, a 1% error rate means 1 million people getting a wrong diagnosis. That is not great, so the reliability of these models definitely needs to be increased.

There are a bunch of other categories here, and I think this is a good read: you've got obscene and abusive content, bias and homogenization, information integrity, environmental impact - a lot of things people haven't really discussed. It's also important to see where the guidelines stand and whether there are any gaps the government hasn't thought about. I know that might sound a bit crazy, but like I said before, guys: it's AI. There are always new problems, things you didn't even know you'd have to think about, and with something like a hundred new tools coming online every single day, there will always be something you couldn't have foreseen. So looking through this, it's always worth seeing what the government is focusing on.
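The reliability arithmetic above is worth making concrete: a per-query error rate that sounds tiny becomes a large absolute number of bad answers at scale. A quick sketch using the illustrative figures from the video - 100 million users and a 1% error rate, not real usage data:

```python
# Expected number of users affected by a given error rate. The inputs are
# the illustrative figures from the video, not measured statistics.

def affected_users(user_count: int, error_rate: float) -> int:
    """Expected number of users who receive an erroneous answer."""
    return round(user_count * error_rate)

print(affected_users(100_000_000, 0.01))  # prints 1000000
```

This is why "99% accurate" is nowhere near good enough for critical applications: the acceptable error rate shrinks as the user base grows.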

### Microsoft Robot [18:02]

Then we had Sanctuary AI announcing a Microsoft collaboration to accelerate AI development for general-purpose robots - and general-purpose robots are basically a kind of embodied AGI, so this is a pretty huge deal. If you aren't familiar with Sanctuary AI, they're a humanoid robotics company based in Canada, on a mission to create the world's first human-like intelligence, and their Phoenix platform is really good. The announcement states that, building on the foundation of LLMs, Sanctuary AI is making progress toward large behavior models that ground AI in the physical world by enabling systems to understand and learn from real-world experience, taking advantage of Carbon, the AI control system for its Phoenix robots. I think this partnership is really good because companies like Sanctuary AI are pretty small compared to some of the companies they're competing with. They also recently released their Phoenix Generation 7, which is pretty decent in terms of dexterity and the autonomy the robot can already demonstrate. There was a demo earlier this year showcasing the robot doing various end-to-end tasks very well, and unfortunately there just isn't much hype around this robot. I'm not sure why - maybe it's the look, maybe the demo wasn't edited the way it was supposed to be - but I do think this is by far one of the most underrated humanoid robots that currently exists. Maybe it's because we haven't actually seen the robot walk yet; that could be one of the key missing features. But based on what I've seen so far, the hands look really good and really fluid - the tactile feedback, the durability, the tactile sensors all looked great in that demo. Maybe I'll include it in this video if I can find it.

I do think presentation has been a problem, and it's quite important for some of these companies: obviously focus on the robotics first, but getting your product out there and getting people excited about it gives you increased momentum and exposes your company to more and more investors. Still, I think this is definitely a key player, and now that they're collaborating with Microsoft, the company is bound to see some very interesting growth in this area, because that is just the power of Microsoft. I'm also wondering whether that collaboration extends to companies like OpenAI - whether Sanctuary might use their systems, or whether that's exclusive to Figure and 1X Robotics - so it will be interesting to see if OpenAI extends it, and whether we get a new demo with this robot, because end-to-end autonomy is of course becoming increasingly better as we progress.

### Demis Hassabis On AGI [21:09]

"We need to start thinking about the types of architectures that get built. I'm very optimistic, of course - that's why I've spent my whole life working on AI and working towards AGI - but I suspect there are many ways to build the architecture safely, robustly, reliably, and in an understandable way, and there are almost certainly going to be ways of building architectures that are unsafe or risky in some form. So I see a kind of bottleneck that we have to get humanity through, which is building safe architectures as the first types of AGI systems. And then after that, we can have a flourishing of many different types of systems that are perhaps sharded off those safe architectures, that have some mathematical guarantees, or at least some practical guarantees, around what they do. And if we get this right, then I think we could be in this incredible new era of radical abundance: curing all diseases, spreading consciousness to the stars, human flourishing."

I think this clip is important because it touches on the key future we're actually moving towards. On this channel we discuss AI and LLMs all the time - and there was a recent statement I'll talk about later in the video - but working towards AGI is of course the big goal, and the architecture and safety questions are paramount if we want this to actually work and be scaled up well. His point, that if we can get through the bottleneck of building AGI safely we could be living in a world of abundance, I think is most certainly true. Just imagine a system that's - hard as it is to fathom - say, 400 times smarter than GPT-4 but able to fit on any device. That would fundamentally, radically change society and how we interact in the day-to-day world. An AGI or ASI that's compact, efficient, and architected to run on pretty much any device would change society in ways that are hard to even conceive. The bottleneck is AGI - and after that, well, that's why they call it the singularity.

Now, something I did want to talk about, and maybe glossed over: we had François Chollet - I'm not sure I'm pronouncing that right, and if you're wondering why his opinion matters, he works on deep learning at Google - say that these are all true simultaneously: scaling up deep learning will keep paying off, unlocking more applications or higher performance on existing ones; scaling up deep learning isn't the path to AGI; we aren't particularly close to AGI, and LLMs did not represent a step closer; and we're nowhere near full deployment of existing deep learning techniques, so a huge amount of value remains to be created with the tech we already have. What I'm wondering is whether he's saying this because he thinks Google is at a dead end for LLMs - and this could be completely wrong. Honestly, I don't agree with that last statement, and not agreeing with someone as smart as he is might seem dumb in retrospect, but without further explanation I can't see why LLMs don't represent at least a step closer to understanding AGI. Even a failure advances you on the path, because you then know which way not to go, so the claim doesn't make sense to me fundamentally.

Anyway, the point I'm trying to make is that maybe he said this because, one, Google is at a dead end or plateau: every new model that has come out recently has been only marginally better than GPT-4. We could argue that's probably the case, because these companies try to frame their press releases to shock everyone with "better than GPT-4 on all benchmarks" - which is why we sometimes see things like Gemini Pro that are only marginally better than GPT-4, so when people talk about it they can say it beats GPT-4 on all benchmarks. Or, two, maybe with the GPT-4-style architecture and the way we currently train these models, there's a plateau we haven't fully understood yet, and maybe Google is at that plateau: they keep training models and think this just isn't going to go anywhere. But with GPT-5 I do truly believe - considering Sam Altman called GPT-4 "dumb" while many people like me who use it find it pretty incredible - that maybe, just maybe, OpenAI has discovered a new architecture in-house, and they'll probably use it to power their next GPT-5 system, or at least some watered-down version people can readily use on a day-to-day basis. So I'm wondering if this picture changes over time, but it definitely was a very controversial tweet.

### China's New Humanoid Robot [26:14]

We also had Tiangong, the humanoid robot. I actually covered this in another video, but long story short, China is ramping up its efforts to compete with the US on many of these technologies. I'm sure the Figure robot lit a fire under the competition, but I'm guessing this was in development for ages anyway. China also recently issued some kind of statement - I'm going to talk about it in another video - essentially saying: look, we need to take robotics and AI seriously, because this is going to be the next big thing. And if you know anything about China, it's that they can make things really, really quickly - look at their construction, how fast they can build. Even recently, BYD (I think it was) set up EV production turning out tons and tons of EVs, literally one in something like 40 seconds. It's pretty crazy what's going to be coming out of China; I think we're probably going to see more come out of China than the US in the coming years, because they do not mess around when it comes to this stuff.

### Github Copilot [27:09]

Then we also had GitHub Copilot Workspace, which is fascinating, because the world of software engineering continues to change. As a developer, Copilot is of course a tool most people use, and with everyone talking about Devin - how it's expensive, how it's this or that - it's clear a lot of people have now realized there's a lot going into this space: a lot of money, a lot of development, and so many companies working on the back end, judging by the research papers and such that I've seen. This space is going to get better and better, and I think it's going to be one of the most interesting areas of AI development. We're not sure whether software development as a profession is going away entirely, but I do think at least large parts of the software development ecosystem are going to be automated - which isn't a particularly bad thing if you're a software developer now, but in the future, with new architectures, it might become fully automated. I might get a bit of flack for saying that, but I think it's always important to look at where the trend is going and at least try to be there on the horizon. With GitHub Copilot Workspace there's a new developer environment - you can actually sign up for it; I'll leave a link in the description. Workspace builds a full plan, which is fully editable, and then you can edit the code directly in the Copilot Workspace, which is just a lot more effective. So yeah, this is pretty incredible stuff.
