OpenAI WHISTLEBLOWER Reveals What OpenAI Is Really Like!
11:37

TheAIGRID · 19.06.2024 · 20,023 views · 489 likes

Video description
Learn A.I With me - https://www.skool.com/postagiprepardness 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Checkout My website - https://theaigrid.com/ Links From Today's Video: Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of Contents (3 segments)

Segment 1 (00:00 - 05:00)

So there was a recent interview that was actually quite insightful, because it came from an OpenAI employee who made stunning predictions about how the future of AGI is going to be announced/implemented on many different levels. The interview was on the Hard Fork podcast from The New York Times, and the person I'm talking about is Daniel Kokotajlo. Now this is pretty crazy: the whole interview is much longer, but there are four insightful things I do want to talk about, because they show us some very new things that a lot of us didn't know about OpenAI, namely the actual safety structure inside OpenAI, a few things that went on there (including a few things on Microsoft's part), and his date for when he predicts AGI.

Now I want you all to remember the previous dates we've spoken about in the other videos, the dates we spoke about for AGI, because there seems to be a trend from every single OpenAI employee that I've seen. I've looked at blogs, I've looked at posts, I've looked at interviews, and there is one prevailing date that keeps coming up whenever AGI is mentioned. It could just be a coincidence, but I am erring on the side of maybe not. Without further ado, let's dive into exactly what's going on.

So in this first clip here, surprisingly, he talks about something that I didn't even know happened, and this is like breaking news. Basically, he talks about how an internal safety board was supposed to approve Microsoft's release of GPT-4 in, I guess you could say, a part of India or somewhere in India, and apparently Microsoft just went ahead and released it anyway, which was crazy. So take a look at this:

"I couldn't believe this. Like, if I saw this on a forum I would have been like, ah, it's probably not true, but as someone who was working at OpenAI, taking this at face value, that is pretty crazy. This board had some people from Microsoft on it and some people from OpenAI on it, and it was supposed to deliberate and then eventually approve major deployments on the GPT-4 scale or bigger. It had been set up prior to reaching the GPT-4 scale, but it was set up with the expectation that once we hit that level and thereon, it would be something that would gate deployments; you'd have to get approval from the DSB. While the DSB was still deliberating about this, me and various other people started hearing rumors that Microsoft had just gone and deployed it anyway, specifically in India. And so we had some conversations about this, we made inquiries to managers, and so forth. I don't want to get into too much detail about who said what to whom, but I came away disappointed, because it seemed like we had this self-governing structure in place, and there was a violation of the structure, and rather than holding Microsoft accountable for doing this, we were afraid to damage our relationship with Microsoft, given the sensitivities involved. You know, they have all this compute, we're trying to..."

So there we have it, ladies and gentlemen. This is a shocking revelation, given that a lot of people were suggesting that Microsoft's partnership with OpenAI was in fact going to, I guess you could say, contend with the safety people. You can see here that there was a clear agreement that they were supposed to get approval before pushing the model out for public release; however, this didn't happen. That this oversight by the safety board didn't happen is pretty remarkable, and it's definitely a revelation. Considering all the things that have happened at OpenAI, maybe it didn't shock me entirely, but it definitely did surprise me. I think a lot of people assume that every single human being and every single organization is made up of strict, military-style professionals who would never make a mistake, but at the end of the day it's humans who make mistakes, who have incentives, and who maybe sometimes are greedy, and what we can see here is that in action: humans are flawed creatures, and just because it's Microsoft doesn't mean they won't rush out a product if they believe it's going to advance the company's efforts, which I think is a little bit telling for the future of how things might be in this area. Of course, there is also the possible future nationalization of OpenAI, something that has recently been predicted, considering the fact that they just hired someone who worked at the NSA for a very long period of time. But overall, I think this is pretty crazy.

Now, another thing he goes on to here is pretty crazy too. This is where he talks about Ilya Sutskever and the culture of work after the coup, and basically he says that after the Ilya coup there was this

Segment 2 (05:00 - 10:00)

entire culture at OpenAI that was basically "F you, safety people." That is crazy. Like, I would have thought that the safety people would be some of the most respected in the company, but it seems like after that, there was this whole dynamic that was just strange, I guess:

"After Ilya Sutskever and Helen Toner, and after that whole fight, it was like, oh man, things have sort of polarized. Like, I don't think that we're going to talk it out and pivot. And then also, just on a personal level, I think there was a lot of anger directed at, sort of, safety people such as myself, because the perception was that the old board was being dishonest, and that they didn't really fire Sam because he was insufficiently candid towards them, but rather they fired him because they wanted to slow down and he didn't, and they needed an excuse."

So I think that was pretty telling. After Ilya Sutskever left, I'm not surprised that the culture around the work changed, but that there was this entire culture against safety people, that was surprising. Considering that the superalignment team has now been disbanded (maybe OpenAI didn't disband the team entirely, but they did get rid of it), maybe it's like, whoa, what on Earth was going on in the internal workings? Maybe this is part of the reason Jan Leike left; maybe, considering that some of the researchers were actually even fired, that all plays into exactly how OpenAI managed some of their decisions. Of course it is quite hard to say, but it can't be denied that this may have been a contributing factor, especially after the board coup. So it's clear that there were shifting dynamics in terms of how safety people were viewed in the AI world, and this is something I find truly fascinating.

Now, the next point here is even more fascinating, and this is where he talks about his prediction. Up at the top I've put "AGI by 2027," and that's exactly what he says. 2027, ladies and gentlemen, is the same timeline/year that I've heard from pretty much every available source. Of course, people like Sam Altman have said 5 to 10 years; I think he's become a little bit more conservative, considering that we're of course going to have to get the hardware side built out in terms of the compute aspect of things. But I think it's kind of interesting that all these OpenAI employees are predicting such short timelines for monumentally transformative technology:

"I think that publicly available information about the capabilities of these models and the progress that's been happening over the last four years is already enough to make extrapolations and say, holy cow, it seems like we could get to AGI, you know, in 2027 or so, or sooner. You don't need any secrets to be able to make that inference. And then, for the second thing, about whether this is a big deal, whether this is really dangerous, again, you don't need any specific secrets."

So yeah, he said that even just looking at publicly available information, not things that are in the labs and secret, you can kind of see that 2027 is going to be the year when AGI potentially gets here, and that is pretty incredible. I mean, it's one thing seeing it on a blog post, or hearing it three or four times, but coming from someone working at OpenAI, I think it's pretty telling about the kinds of technologies they're working on and how people expect them to evolve based on the kinds of advancements we're going to make in the near future.

Now, this interview was remarkable. It was honestly kind of interesting to see someone get on camera like this; we had Leopold Aschenbrenner the other day talking about all the things going on at OpenAI as well as the future of artificial general intelligence. There was also a part of the interview that wasn't by Daniel Kokotajlo: The New York Times did some very interesting reporting on Sam Altman's empire. Sam Altman is the CEO of OpenAI, and that does make him a pretty powerful person, but what if I were to tell you that this might actually extend into a further/different area, to where Sam Altman could potentially be the most powerful man in the world? That might sound like an exaggeration, but listen to what they say first, because what they state first is true and insightful. I'm not saying that being the most powerful man in the world is a crazy or bad thing; I'm just saying it brings up a very interesting question as to what Sam Altman's motives are for the moves he is making in the AI industry:

"...what this is about for Sam Altman, right? Because this is a person who is already rich, but if he has a piece of all the biggest and most important

Segment 3 (10:00 - 11:00)

companies in AI, then if you fast forward to the time period that Daniel's worried about, when AGI is arriving, one person is going to have a staggering amount of power in that universe, right? He will be the CEO of OpenAI, and he will be a major shareholder of many of the most important companies in the space. Why do I bring this up? In that kind of world, you want to know who this person is, and what sort of governance structures are around him to rein him in, because right now it does seem like we're on a trajectory for Sam Altman to be one of the most powerful people in the world." "Totally."

So yeah, there we have it, guys: Sam Altman could be one of the most powerful people in the world, and I don't think that is an overstatement, considering that you'd be at a company that could potentially be at the helm of AGI, and could have major partnerships with all of the other companies that depend on your technology, which is going to be integral for the majority of software companies. I mean, wouldn't you be the most powerful man in the world? I don't think there's going to be anything as powerful. Of course you've got politics, but artificial superintelligence will eventually shift the power dynamics, because there's no way to outsmart it; I don't see how he wouldn't be. So let me know what your thoughts on this are. I definitely think this interview is worth a full watch; I'll leave a link to it in the description. But it's pretty insightful that, first of all, Microsoft decided to release a model anyway; second of all, the culture at OpenAI was basically, you know, "screw you, safety people"; and then, of course, there's his prediction of AGI by 2027. That is truly remarkable. So let me know what you guys think about that, and I'll see you guys on the next one.
