# Snapchat's New AI, Elon Musk's New AI, GPT-4, AutoGPT, Facebook's New AI [Weekly Dose Of AI #1]

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=0vuDxEh79Uc
- **Date:** 23.04.2023
- **Duration:** 13:16
- **Views:** 42,493

## Description

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
#IntelligentSystems
#Automation
#TechInnovation

## Contents

### [0:00](https://www.youtube.com/watch?v=0vuDxEh79Uc) Segment 1 (00:00 - 05:00)

So let's take a look at everything that was released last week in AI, and boy oh boy, pay attention, because there is a lot of stuff that you missed.

Coming in at number one, we have GPT-4's multi-modal feature being, I guess you could say, expedited by Microsoft. Essentially, researchers at Microsoft decided to release Visual ChatGPT, where you can actually use ChatGPT to chat with certain images. The videos that you're seeing on screen are essentially demos, and there is of course a research paper in which they describe exactly how they use ChatGPT to describe, chat with, and generate these images. You can actually use this yourself if you want: the web page is still live, and all you're going to need is an OpenAI key. Once you put that in, you're going to be able to use the web page and interact just like I did.

Bloomberg also released their large language model, which is centered around finance. Think of it like ChatGPT, but trained specifically on finance. They trained this large language model on all the finance documents they had collected, and the resulting model can make reasonably accurate predictions and guesses about market sentiment. Over time, they're going to be improving this software by adding more and more parameters.

Then we had Facebook releasing their new Segment Anything Model. Essentially, they used AI to detect every single item or object that might be in an image. This isn't just impressive because it detects absolutely everything in an image; it actually knows what each item is, so it's both an image segmenter and a classifier. Right here you can see that it's able to identify how many cats are in this entire image, which is going to have a variety of different applications worldwide. We also had them showcase exactly how
Facebook's Segment Anything is going to be used in a real-world scenario. Right here you can see that the AR goggles worn by this user are able to accurately identify whatever is in the field of view. Many people do have trouble seeing things, such as those who are partially blind or have visual impairments, and maybe there are also some who see a new object and need it added to their interface as quickly as possible, because some objects are foreign or very rare to find. This could open up a whole world of different applications, and it just goes to show how quickly this is being adopted and how it can be used in everyday scenarios where maybe you want to sort something or find something quickly.

What's also cool is that they showed us their plans for the future. For example, they showed exactly how they plan to integrate this into everyday cooking: when you're making a new meal and might be confused about the cooking steps, it is actually able to identify exactly what items you need, how much of each, and how to use them effectively. We also saw that if someone is forgetful about feeding their dog, this is going to be very useful, especially for people who have disabilities that affect their memory of what was going on in a particular environment. Something that's really useful.

Then we had Microsoft release their new software, Jarvis. Essentially, they combined ChatGPT with HuggingGPT, and they managed to get ChatGPT to complete pretty much any task a user can ask for, even if it is multimodal. The first request was: please generate an image where a girl is reading a book, make sure her pose is the same as in the image that I've just submitted, and then describe this image with your voice. And of course you might be thinking
how on Earth is ChatGPT, or Jarvis, going to do this, because it doesn't have a voice? You can see right here that four stages are then commenced, based on all the tools that Jarvis has access to. Stage one is planning: ChatGPT, acting as Jarvis, plans out exactly what it is going to do by selecting the tasks and the order they run in. Then we have stage two, model selection, where Jarvis decides, okay, I'm going to use this specific model because it needs to complete this specific task. After stage two we have stage three, task execution, where it runs whichever AI model was selected, and then of course stage four, response generation, to produce the desired output. So definitely something that was really good; the final response was there as you can see on screen, and Jarvis proved to be a very good success. It even documents how it does all this, so it definitely shows that Microsoft are getting better and better every single week at using AI to get different requests done. You can see right here that they're able to identify exactly what is going on in the image, and remember, this isn't GPT-4; it was done with a lot of tools that are already online. You can actually use Jarvis yourself if you want: go over to the HuggingGPT website, and all you're going to need is some API keys. Bear in mind it is a little bit buggy, but it is something that you can use. Now, something that I thought was
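
The four-stage Jarvis/HuggingGPT flow just described (planning, model selection, task execution, response generation) can be sketched as a minimal Python loop. Everything here, including the task names and the `MODEL_REGISTRY` entries, is a stub invented for illustration; the real system delegates each stage to ChatGPT and to models hosted on Hugging Face:

```python
# Didactic sketch of a HuggingGPT/Jarvis-style four-stage pipeline.
# All planners, model names, and "executions" below are stubs.

def plan(request: str) -> list[str]:
    """Stage 1 (planning): split a request into ordered sub-tasks (stubbed)."""
    tasks = []
    if "image" in request:
        tasks.append("text-to-image")
    if "describe" in request:
        tasks.append("image-to-text")
    if "voice" in request:
        tasks.append("text-to-speech")
    return tasks

MODEL_REGISTRY = {
    # Stage 2 (model selection): map each task type to a model (hypothetical names).
    "text-to-image": "stub-diffusion",
    "image-to-text": "stub-captioner",
    "text-to-speech": "stub-tts",
}

def execute(model: str, task: str) -> str:
    """Stage 3 (task execution): run the selected model (stubbed as a string)."""
    return f"{model} completed {task}"

def respond(results: list[str]) -> str:
    """Stage 4 (response generation): assemble the final answer."""
    return "; ".join(results)

def jarvis(request: str) -> str:
    tasks = plan(request)                                     # 1. planning
    models = [MODEL_REGISTRY[t] for t in tasks]               # 2. model selection
    results = [execute(m, t) for m, t in zip(models, tasks)]  # 3. task execution
    return respond(results)                                   # 4. response generation

print(jarvis("generate an image of a girl reading, then describe it with your voice"))
```

The point of the sketch is the control flow: the orchestrator never does the work itself, it only decides which model handles which sub-task and stitches the outputs together.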

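The "segment everything, then identify and count" idea behind Facebook's Segment Anything (for instance, counting the cats in an image, as mentioned above) can be illustrated with a toy pass over a binary mask. This is a didactic stand-in written from scratch, not the SAM model or its API:

```python
# Toy illustration of "segment, then count": count the distinct
# connected regions of 1s in a binary mask via iterative flood fill.
# Real segmentation models produce such masks per object; here we
# fabricate a tiny mask by hand.

def count_regions(mask: list[list[int]]) -> int:
    """Count 4-connected regions of 1s in a binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                regions += 1
                stack = [(r, c)]  # flood-fill this whole region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return regions

# Two separate blobs -> two "objects" detected.
image_mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
print(count_regions(image_mask))  # → 2
```
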
### [5:00](https://www.youtube.com/watch?v=0vuDxEh79Uc&t=300s) Segment 2 (05:00 - 10:00)

very interesting was the deployment of some robots to help patrol Times Square in New York. Take a look at this: "Announcing three new policing technologies in New York City: the K5 autonomous security robot, the Spot 'Digidog' robot, and the StarChase GPS attachment system. Our job is to fight crime and keep people safe, and these tools are significant steps forward in that vital mission."

We also had someone showcase exactly what's possible when you provide Gen-1 by Runway with a perfect driving image and a very good video reference. Shout out to the person who created this, because it was definitely very creative, and it just goes to show exactly what's possible with the right combination of artistic styles in text-to-video, and what's going to be capable in the future.

Then we had Amazon releasing their new service, called Bedrock, which essentially allows businesses from all around the world to access large AI models that they can easily use and fine-tune to their business-specific needs. You're about to see a clip of the CEO talking about why they launched this and how effective it's going to be: "Companies don't want to go through that, and so what they want to do is work off of a foundation model that's big and great already, and then have the ability to customize it for their own purposes. And that's what Bedrock is, which will give you access to large language models from Anthropic, from Stability AI, from AI21." You can see right here, on Amazon Bedrock's new page, that you have text generation, chatbots, search, text summarization, image generation, and personalization, and you can see the choice of foundation models that will be available on the Amazon Bedrock platform, which you can then fine-tune to your business's specific needs. They also
did release CodeWhisperer, which is essentially an AI coding companion that is going to help people get more done faster, so definitely another area in which Amazon is deciding to innovate.

Then we had Nvidia release their text-to-video paper, in which they showcase many examples of how they managed to use Stable Diffusion to generate very high-fidelity videos that are quite temporally consistent. You can see from all of these examples exactly what they've been able to do. I'm not sure how long they've been working on this, but a lot of the footage I have seen looks far better than some of the previous examples we've had from other companies that have tried before, and text-to-video is by far one of the hardest things we are looking to crack when it comes to generative AI. You can see on screen right now that some of these highway examples are very good, and some of these landscape examples also look very effective, as they seemingly manage to stay pretty smooth and coherent in what the animation looks like. So I think Nvidia are definitely moving in the right direction, because they seem to be a step ahead of where everyone else is at this moment in time in text-to-video.

Then of course we had Google's own text-to-video, where input images are able to generate a fully fledged video. A lot of these videos do look very good. The difference between this and Nvidia's is honestly quite interesting, because they use very different approaches to generate the result: this one uses a multitude of images, whereas Nvidia's is mainly text-driven, though it can also do what Google is doing here, where you input a number of driving images and get an output in which your driving image is placed in a new environment. Once again, Google are showing how far along they are in the AI race
and that they're actually not that far behind their competitors.

And of course we had Elon Musk, who decided that he was going to create his own TruthGPT, because he, along with many users of ChatGPT, realized that it is somewhat inherently biased towards certain arguments. For example, right here this article was talking about how ChatGPT's inherent bias was particularly focused on certain topics, and how it declined to respond when asked certain questions about certain topics. Elon Musk has decided that he wants to create a ChatGPT alternative that will pretty much do anything the user requests, as long as it's within standard ethical confines, just reporting the data: "A path to AI dystopia is to train an AI to be deceptive. So yeah, I'm going to start something which I call TruthGPT, or a maximum truth-seeking AI." He also talks about why it's not really about politics but more about AI safety: an AI "that tries to understand the nature of the universe. And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting

### [10:00](https://www.youtube.com/watch?v=0vuDxEh79Uc&t=600s) Segment 3 (10:00 - 13:00)

part of the universe." Hopefully, I would think, because, you know, humanity could decide to hunt down all the chimpanzees and kill them.

Then of course we had Google announcing that it's going to be launching a new AI-powered search experience, codenamed Project Magi, which is going to be integrated into Google. Essentially this is going to be a direct competitor to Bing's recent integration of ChatGPT into its search, and they are going to try to expedite this as quickly as possible and release it next month, which is actually going to be sometime in May. Google decided that they needed to get this out as quickly as possible because recently Samsung and Apple were reportedly in talks about switching the default search option on their phones, so Google was pretty much forced to start developing this as soon as possible.

Then of course we had the release of AutoGPT, which is now powering AgentGPT, which is essentially a form of AI where you give it one prompt and it runs off and organizes itself with a bunch of tasks, then starts to scour the internet and execute on those specific tasks. This has absolutely blown up on GitHub, and it's something that people are now using to generate all sorts of texts and articles and to complete many tasks. I do think that this is likely what the future of AI is going to be, because it removes a lot of the work that people initially have to do, depending on the nature of the task. Many people are now discussing the fact that once you get these agents to be able to do certain things, companies are going to hire these agents for cents on the dollar, and then we might actually see some major layoffs coming.

Then of course we had Quora release their new AI chatbot, which is called Poe. What's good about Poe is that you're able to customize different chatbots to your liking, to fit a certain style or
character, and then talk with that chatbot in any manner possible. What's also cool about Poe is that they give you access to GPT-4 and to Claude Instant, which is another competitor to ChatGPT, and as you can see right here, I was setting up bots that allow me to interact with someone in Steve Jobs' personality for advice on YouTube. They also have the original Sage bot, a general-knowledge bot released by Poe themselves.

We also had Snapchat release their own AI bot, My AI, which is pitched as their sidekick. If I'm being honest, the results I've seen floating around the internet are very up and down: some people report great responses, while others report responses that simply aren't accurate at all. I'm guessing that they wanted to rush this out pretty quickly, which is why I guess we're getting it in this unfinished sort of state.

And of course we had Bard, which initially launched riddled with bugs and many mathematical failures, getting a very nice update: it is going to be powered by some of the processing power of PaLM, the 540-billion-parameter model, which is very effective at many tasks, including driving robots in the real world, and this should make Bard much more effective at what it does.
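
The AutoGPT/AgentGPT pattern mentioned above (one prompt in, the agent plans tasks, executes them, and queues follow-ups until done) can be sketched as a minimal loop. The planner and executor below are stubs invented for illustration; the real frameworks call an LLM at each step and can browse the web:

```python
# Minimal sketch of an AutoGPT-style task loop with stubbed planning
# and execution. The task strings and follow-up rule are fabricated
# purely to show the control flow.

from collections import deque

def plan_initial_tasks(goal: str) -> list[str]:
    """Stub planner: derive a first task list from the goal."""
    return [f"research: {goal}", f"draft: {goal}"]

def execute_task(task: str) -> tuple[str, list[str]]:
    """Stub executor: return a result plus any newly spawned tasks."""
    if task.startswith("draft:"):
        # A draft spawns one review pass before the agent finishes.
        return f"done {task}", [task.replace("draft:", "review:")]
    return f"done {task}", []

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    queue = deque(plan_initial_tasks(goal))
    log = []
    while queue and len(log) < max_steps:  # step cap so the loop always terminates
        task = queue.popleft()
        result, follow_ups = execute_task(task)
        log.append(result)
        queue.extend(follow_ups)           # agent re-plans as it goes
    return log

print(run_agent("write an article about AI news"))
```

The step cap matters: because agents can spawn new tasks while running, real implementations also bound iterations (or cost) so a self-extending task queue cannot loop forever.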

---
*Source: https://ekstraktznaniy.ru/video/14883*