OpenAI CEO SHOCKS Everybody About GPT-5 (GPT-5 Update)
9:09


TheAIGRID · 24.04.2023 · 456,951 views · 4,238 likes


Video description
OpenAI CEO Finally Admits The Truth About GPT-5 (GPT-5 Update). GPT-5 will be released incrementally: https://ai.plainenglish.io/gpt-5-will-be-released-incrementally-5-points-from-brockman-statement-plus-timelines-safety-e931f3a8bad3 Sam Altman interview: https://www.youtube.com/watch?v=4ykiaR2hMqA Welcome to our channel, where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos. Was there anything we missed? (For business enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience #IntelligentSystems #Automation #TechInnovation

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

For those of you who are wondering exactly what's going to happen with GPT-5, this video is going to tell you quite a lot, because the CEO of OpenAI actually had some very interesting things to say in a recent interview. Check out this article right here: you can see that the CEO has confirmed that GPT-5 is not in training and won't be for some time. So let's take a look at the clip which surfaced online, in which he details exactly how they are training GPT-4, when we can expect GPT-5, and whether they are going to be training this model anytime soon: "…was where we need the pause. An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not, and won't for some time, so in that sense it was sort of silly. But we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter. So I think moving with caution and an increasing rigor for safety issues is really important. The letter, I don't think, is the optimal way to address it."

There you can see the OpenAI CEO Sam Altman clearly discussing the role that GPT-5 will play in the next level of AI evolution, and he says he needs to make sure that this AI is actually going to be released safely. That is, of course, something that was brought up in the recent letter "Pause Giant AI Experiments: An Open Letter": "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." You can see that, as of recording this video, it has around 27,000 signatures. The reason this has drawn the attention of many world leaders and policymakers is that many influential people, such as Elon Musk, have decided to sign this letter and publicly state that GPT-5 is likely to be somewhat dangerous if certain protocols aren't in place before it is released, hence the call for this pause in AI.

So of course we now know that Sam Altman has responded to this by stating that GPT-5 isn't even being trained at the moment and won't be for some time. We know the letter states, "we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4," but you have to understand that many people haven't even caught up to GPT-4, and GPT-4 hasn't been fully rolled out yet. We're still missing many features, such as the larger token window and, of course, the visual multimodal feature which was teased in the announcement trailer. So there are many different things still to be discussed, and there is something really interesting coming from Google that definitely impacts how GPT-5 is going to be trained, which I need to discuss later on in the video.

Now, Sam Altman did state some more things about GPT-5 and the letter on Twitter. He stated right here that one thing in the debate about the pause letter he really agrees with is that OpenAI should make great alignment datasets and alignment evaluations, and then release those. Essentially, what he's saying is that whichever alignment approach OpenAI uses, they should definitely release it, because it's something other companies need to use as well in order to make sure their AI is going to be released safely. AI safety is definitely one of those issues where, if you don't fix it, it's going to be very hard to put the genie back in the bottle, as some would put it.

Then, of course, we had Brockman talking a lot about GPT-4 and GPT-5 in terms of how training is actually done. He said: "The underlying spirit in many debates about the pace of AI progress is that we need to take safety very seriously and proceed with caution. It's our key mission. We spent more than six months testing GPT-4 and making it even safer, and built it on years of alignment research that we pursued in anticipation of models like GPT-4." Essentially, what he's saying is that when they release newer AI tools, safety is one of the biggest priorities, because they don't know how powerful this technology is going to be. A lot of the time there are these things called emergent abilities, where, once an AI gets smart enough, certain abilities just become present because of the large-scale nature of that large language model. These abilities aren't taught; they simply emerge. And if you have abilities that do emerge, you have to ensure there is a long, rigorous process to confirm the AI is completely safe before it is released into software like ChatGPT, or the many different API tools which use OpenAI's large language models to run their businesses. If you're confused by my description just there, take a look at this definition: an emergent ability is a characteristic or skill that arises spontaneously from the interactions and complexities within a system, rather than being explicitly programmed or designed into it. Now, I watched two videos that were very interesting about the dangers of this, and I think you should take a look at a few clips, because if you understand the examples we've been seeing, you'll understand why a pause on GPT-5 may be necessary to

Segment 2 (05:00 - 09:00)

understand and make sure that these models are 100% completely safe. "These models have capabilities we do not understand: how they show up, when they show up, or why they show up. Again, not something you would say of the old class of AI. So here's an example. These are two different models, GPT and a different model by Google, and there's no difference in the models; they just increase in parameter size, that is, they just get bigger. You ask these AIs to do arithmetic and they can't do it, and at some point, boom, they just gain the ability to do arithmetic. No one can actually predict when that'll happen. Here's another example: you train these models on all of the internet, so they've seen many different languages, but then you only train them to answer questions in English. But you increase the model size and at some point, boom, it starts being able to do question answering in Persian. No one knows why." So right there you can see that this is definitely something that sets off alarm bells for the people who are making AI, and even those who are trying to regulate AI, because imagine making GPT-5 and it is able to do certain things that we don't understand. And remember, some of these abilities we don't realize until maybe months or even years after. For example, there's theory of mind, which we only realized GPT-4 had; and then, when we looked back, we realized that there was some theory of mind in GPT-3 and in GPT-2. Now, this is also prevalent in other AI software: in a recent interview where the Google CEO was talking with 60 Minutes, they detail how it was able to learn an ability which it didn't previously have coded into it. I can't play the clip due to copyright, but I will have a link to the interview in the description of this video. So I do think that is a very important point that has been brought up by many of the AI researchers who are looking to regulate this software, because, as we know, we're kind of stepping into the unknown as the technology just rapidly advances.

But of course, with regards to GPT-4, there are still other companies like Google who are working on trying to beat ChatGPT and GPT-4 by pumping billions into smaller companies that have their own large language models. For example, they are pumping millions into a bot called Claude-Next, which is very good and quite close to ChatGPT, and their goal is literally to create something, get this, that is 10 times more powerful than GPT-4 within 18 months. That is definitely a very quick ramp-up if that is their final target, which means we could see the AI race heat up even more than we thought, with these companies rushing out software which sometimes might not even be ready, just so that they can potentially own the largest part of the future. Now, Sam Altman also discusses right here what's needed for a good AGI future. Number one, the technical ability to align a superintelligence. This is definitely something we must focus on, because if something is superintelligent it is likely to have goals beyond our understanding, and it's probably going to be one of those pivotal moments in the future if we don't get it right. And he also talks about an effective global regulatory framework, including democratic governance, where essentially you have one framework that everyone adheres to, because it's the only way to ensure that we do have safe AI. So let me know your thoughts on this: do you think GPT-5 is going to come quicker once OpenAI realize that Google are starting to invest more in other companies to outpace them, or are we going to see a slowdown in the release of AI products? Now, something that I did forget to mention, and this is a very large point, is that GPT-5 is going to be released incrementally, meaning that there are going to be subsequent updates every month or every couple of months that will lead to the eventual version of GPT-5. The reason for this is that they want to address the safety concerns and continuously enhance the AI model, rather than do one big rollout, which could present a number of different issues. So, definitely a really nice change.
