# Bloomberg's New INSANE BloombergGPT Takes the Industry By STORM! (NOW UNVEILED!)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=3ZqJaL1jJN4
- **Date:** 11.04.2023
- **Duration:** 11:18
- **Views:** 16,332
- **Source:** https://ekstraktznaniy.ru/video/14900

## Description

Bloomberg's New INSANE BloombergGPT Takes the Industry By STORM! (NOW UNVEILED!)

Welcome to our channel where we bring you the latest breakthroughs in AI. From deep learning to robotics, we cover it all. Our videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on our latest videos.

Was there anything we missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience
#IntelligentSystems
#Automation
#TechInnovation

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

So another company has just released their large language model: introducing BloombergGPT, a 50-billion-parameter large language model purpose-built for finance tasks. You can see right here that Bloomberg made this announcement a couple of days ago, and we're going to get into exactly what Bloomberg is. If you don't know what the company does, Bloomberg is essentially a finance company that makes software, and of course they also produce the articles and videos that you see online. Their most popular piece of software is the Bloomberg Terminal, which gives many investors access to real-time financial data with unparalleled accuracy.

Now, their research paper is really interesting, but first we're going to go over the article they released alongside it to see exactly what they've commented on. One of the key points they made concerns what they're going to use this model for, and one of the main applications is sentiment analysis. This is going to be key when examining the kinds of predictions Bloomberg makes, and I'm pretty sure it's going to shape how Bloomberg makes its predictions and the kinds of articles it puts out, especially relating to finance and the data they provide. It's definitely going to be really interesting now that they have an AI in the mix as well.

What's also interesting is what they said about the data collection and curation of the resources used to train this large language model. Bloomberg was very well positioned for this, because they've been collecting financial-language documents for over 40 years, and pulling all of that data together gave them 363 billion tokens. But that wasn't all: they also added a 345-billion-token public dataset to create a large training corpus of over 700 billion tokens. Essentially, they wanted to combine the public data with their own, and that roughly 700-billion-token training set is what allowed them to create the 50-billion-parameter model, so it's definitely a large amount of data.

Now, what's interesting here is the table they presented. At the top you can see the finance-specific tasks, and at the bottom the general-purpose tasks, so it's very interesting to see what they compared the model against. Pay attention here, because this is where, not shady exactly, but some very interesting comparisons are drawn. One of the first large language models they compared it against was Meta's older model, OPT, a 66-billion-parameter large language model. They also compared it against BLOOM, another large language model built by around a thousand researchers working together. Moving back to the original table, we can see that in the first area they also compared against GPT-3. Looking at BloombergGPT's scores, on MMLU it scored 39.18, GPT-NeoX scored 35, OPT scored 35, and BLOOM scored 39, while GPT-3 scored 43.9.

Now, it's actually very interesting where this AI currently stands, based on its current dataset, just on the broad tasks. Remember, these are only the general-purpose tasks, so this is not where we're going to analyze it in depth. But what's interesting, and I want you guys to pay attention to this, is that GPT-3 does do better than BloombergGPT on MMLU, reading comprehension, and linguistic scenarios. What's interesting is that when we go on to the other table, they've actually removed GPT-3 and all of OpenAI's GPT models from the comparisons. I'm not sure why that is, but I'm guessing there's a chance that GPT-3 or GPT-3.5 outperformed BloombergGPT even on finance-related tasks, which just goes to show how powerful ChatGPT might be. I mean, why would they not include it? They only compared against GPT-NeoX, OPT-66B, and BLOOM-176B, which are indeed older models. Of course, you can see that BloombergGPT does outperform those other models by a decent margin, but what's going to be very interesting is how it compares against other large language models that can be used for other tasks, such as PaLM. If you do want a direct comparison, on the MMLU leaderboard BloombergGPT ranks at 39 on that specific test, which isn't that strong, but it isn't particularly trained for that task; on the tasks it is trained for, I do think it's going to get better and better over time. So while it's not going to be the best at natural-language-processing tasks like the standard questions an average user might ask ChatGPT, the paper we're going to go over contains examples showing that this is definitely going to be a very powerful model, especially once it's fully fine-tuned for finance, and you're about to see those examples right now.
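The corpus numbers quoted in this segment are easy to sanity-check with simple arithmetic; here's a minimal sketch (token counts as read out in the video, rounded):

```python
# Training-corpus composition reported in the video: ~363B tokens of
# Bloomberg's in-house financial text collected over roughly 40 years,
# plus a ~345B-token public dataset.
financial_tokens = 363e9
public_tokens = 345e9

total_tokens = financial_tokens + public_tokens
financial_share = financial_tokens / total_tokens

print(f"total: {total_tokens / 1e9:.0f}B tokens")     # ~708B, i.e. "over 700 billion"
print(f"financial share: {financial_share:.1%}")      # ~51.3%
```

This matches the "over 700 billion tokens" figure, with the in-house financial data making up just over half the corpus.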
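The general-purpose scores read out in this segment can be tabulated to make the ranking explicit (numbers are as quoted in the video; the paper itself is the authoritative source):

```python
# MMLU scores as quoted in the video (approximate; see the BloombergGPT
# paper for the authoritative numbers).
mmlu = {
    "BloombergGPT": 39.18,
    "GPT-NeoX": 35.0,
    "OPT-66B": 35.0,
    "BLOOM-176B": 39.0,
    "GPT-3": 43.9,
}

# Sort models from best to worst on this benchmark.
ranking = sorted(mmlu, key=mmlu.get, reverse=True)
print(ranking)  # GPT-3 leads; BloombergGPT is second despite its finance focus
```

This makes the point in the video concrete: GPT-3 still wins on this general-purpose benchmark, but BloombergGPT edges out the similarly sized open models.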

### Segment 2 (05:00 - 10:00) [5:00]

In the paper, they talk about financial tasks, because of course that's exactly what they're trying to use this large language model for. What they want is to make sure this model is trained specifically on those financial tasks, and this is a really good example of how these large language models can be trained the proper way. Take the example of sentiment analysis: sentiment analysis is basically how people feel about the markets, whether that's gold, commodities, or stocks. For example, if Bitcoin were to drop ten thousand dollars, the sentiment would probably be bearish or bullish depending on where you stand in the markets. The paper says that a headline such as "Company to cut 10,000 jobs" portrays a negative sentiment in the general sense, but can at times be considered positive for financial sentiment towards the company, because it might result in the stock price or investor confidence increasing. So what they're essentially saying is that a general large language model seeing the title "Company to cut 10,000 jobs" is going to instantly think this is negative sentiment, because of course people are losing their jobs. But a large language model trained on financial data may consider it positive in a financial sense, which is very important for people who are investing, because they need to understand the full context. That's why they took BloombergGPT and compared it with BLOOM, GPT-NeoX, and of course OPT. You can see right here the kinds of prompts they're giving these models: "What is the sentiment? Answer: neutral/positive/negative", "What is the sentiment on X", and of course they've got a binary classification there as well.

It's definitely very interesting, and it goes to show exactly why these kinds of individual large language models are going to be fine-tuned for specific uses. What I also wanted to point out is that in certain instances where BloombergGPT is compared against other models, it actually performs exceptionally well, sometimes even on par with GPT-3, on things like reading comprehension, which is definitely very interesting considering this is their first large language model, and it just goes to show how fast developments in AI are coming. They even say here that across dozens of tasks in many benchmarks, a clear picture emerges: among the models with tens of billions of parameters that they compare against, BloombergGPT performs the best. They also say that while their goal for BloombergGPT was to be a best-in-class model for financial tasks, they included general-purpose training data to support the domain-specific training, and the model still attained abilities on general-purpose benchmarks that exceed similarly sized models. So basically what they're saying is: we trained this largely on financial documents, and somehow it still manages to do better than other similarly sized models trained on general-knowledge data. Honestly, that is truly interesting; I'm not sure whether it's terrifyingly good or just good.

You can also see right here where they're using BloombergGPT to generate valid Bloomberg Query Language, which I'm guessing is the language their system or software needs. Once you ask it these questions, it immediately presents the right data, which is of course good. And this is where things get interesting: they've got the input and the output, and what we have here are instances where they use BloombergGPT to generate short headline suggestions. You can see right here "The US housing market shrank by $2.3 trillion" and so on; essentially, they want to condense that financial data into an accurate headline, and it shortens it to "Home prices see biggest drop in 15 years", which is of course really good. This is going to be useful for them because they're a media company as well: when they want to output data, they might be able to just input an article and get out a perfect title that's very fine-tuned on finance. You can see they've got the input and the output: "Global economy more resilient than expected" right here, and "Google sued over monopoly in online ad market" for the Google story. So it seems to be working very well, and this is what they're going to use in their media company when they want to generate accurate financial headlines very quickly.

Now, this is additionally where things get interesting. You can see they've asked BloombergGPT certain questions on financial tasks: they asked for the CEO of this company right here, the CEO of Silicon Valley Bank, definitely a very important company as of recent times. BloombergGPT gets it right; GPT-NeoX and FLAN-T5 XXL get it wrong. And you can see right here that BloombergGPT gets many of these answers right about the CEOs of these financial companies, or just companies in general. It says they are testing the ability of BloombergGPT to recall the names of CEOs of companies, that they sample up to three answers and present all of them if they are incorrect, and it notes that Michael Corbat was the CEO of Citigroup until 2021.
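The layoffs example above can be sketched as a few-shot prompt for finance-specific sentiment. This template, the example headlines, and their labels are illustrative, not the paper's exact format:

```python
# Illustrative few-shot examples for *financial* sentiment. Note the
# layoffs headline: negative in a general sense, but labeled positive
# here because cost-cutting can be bullish for the stock.
FINANCE_EXAMPLES = [
    ("Company beats earnings expectations", "positive"),
    ("Regulator fines bank over compliance failures", "negative"),
    ("Company to cut 10,000 jobs", "positive"),
]

def build_prompt(headline: str) -> str:
    """Assemble a few-shot classification prompt ending at the label slot."""
    lines = ["Classify the financial sentiment of each headline."]
    for text, label in FINANCE_EXAMPLES:
        lines.append(f"Headline: {text}\nSentiment: {label}")
    lines.append(f"Headline: {headline}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt("Automaker announces large-scale layoffs"))
```

A general-purpose model prompted without the finance framing would likely label the layoffs headline negative; the few-shot examples are what steer the label toward the investor's perspective.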

### Segment 3 (10:00 - 11:00) [10:00]

This highlights the importance of an up-to-date model: you always need something current, and that's why GPT-NeoX actually got this wrong and where inconsistencies can occur. Now, for those of you wondering whether Bloomberg just rushed into the AI race: they've actually had an AI division for quite some time, including a natural-language-processing (NLP) group. This isn't a case of quickly hiring some people and rushing out some kind of GPT software to jump on the trend; they've been doing this for quite a while, they've been hiring people in the sector for years, and now they've added this research paper. Essentially, I guess they just want to stay on top of things, and honestly they seem like a pretty smart company: you can see right here that they have an AI Engineering group that publishes research papers regularly. So I think it's going to be very interesting to see exactly what kind of final version they come out with, because I do think it's going to be amazing at financial tasks. If GPT's rate of improvement has been anything to go by, it's definitely going to be very interesting to see what happens with Bloomberg. Thank you.
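The CEO-recall evaluation described here (sample up to three answers, count the question correct if any of them matches) can be sketched as a simple scoring function; the function name and example answers are illustrative:

```python
# Sketch of the CEO-recall scoring described in the video: the model is
# sampled up to three times, and the question counts as correct if any
# of those samples matches the reference answer.
def recall_correct(samples: list[str], reference: str) -> bool:
    """Correct if any of the first three sampled answers matches the reference."""
    return any(s.strip().lower() == reference.strip().lower()
               for s in samples[:3])

# An outdated model answering "Michael Corbat" (Citigroup CEO until 2021)
# is scored incorrect once the reference answer has changed.
print(recall_correct(["Michael Corbat"], "Jane Fraser"))  # False
print(recall_correct(["Jane Fraser"], "Jane Fraser"))     # True
```

This is exactly why stale training data matters for this task: the scoring is unforgiving of answers that were correct at training time but have since changed.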
