Scientists Just Released Something That Could Change Medicine FOREVER (METAGENE-1)

TheAIGRID · 14.01.2025 · 20,955 views · 544 likes


Video description
Join my AI Academy - https://www.skool.com/postagiprepardness
🐤 Follow me on Twitter: https://twitter.com/TheAiGrid
🌐 Check out my website: https://theaigrid.com/

Chapters:
0:00 Prime Introduction
0:39 Wastewater Analysis
1:29 Metagenome Overview
2:31 Technical Pipeline
3:55 Model Performance
5:26 Bio Risks
5:49 OpenAI Safety
7:26 Access Requirements
9:09 MIT Study
10:58 Prevention Measures
11:42 Team Research
13:30 Future Implications

Links from today's video: https://metagene.ai/

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed?

(For business enquiries) contact@theaigrid.com

Music used:
LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 (CC BY-SA 4.0)
LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (12 segments)

Prime Introduction

So whilst AI keeps increasing its capabilities and crushing every benchmark related to cognitive tasks, one company out there, called Prime Intellect, is focusing on a completely different area. This is a company focusing on the risks of advanced AI, but not in the way you might think. Most people assume I'm talking about advanced ASI taking over the world, but there's actually a much more realistic risk: AI allowing bad actors to create pandemics like COVID-19, by using current or future AI tools to create bioweapons. And this is something this company is focusing on. Every

Wastewater Analysis

drop of wastewater carries a complex signature of life: trillions of DNA and RNA fragments that tell the story of human health at a societal scale. But reading this story has remained beyond our reach. Straightforward extrapolation of today's systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks. As biotechnology advances exponentially, a new frontier emerges: detecting biological threats before they turn into pandemics. We believe this is an important step forward in a race to the top on safety, and we hope we can inspire other researchers and companies to do

Metagenome Overview

even better. Enter METAGENE-1, the metagenomic foundation model trained on over 1.5 trillion DNA and RNA base pairs from wastewater samples. METAGENE-1 is powered by a 7-billion-parameter transformer architecture capable of analyzing entire microbiomes at a societal scale. It identifies subtle genomic patterns, serving as an early-warning system for emerging pathogens, and this helps enable a planet-scale early pathogen warning system to prevent the next pandemic. METAGENE-1 excels at detecting anomalies, useful for monitoring novel biological threats. Developed collaboratively by USC, Prime Intellect and the Nucleic Acid Observatory, it represents a step toward helping safeguard humanity in the age of exponential biology, and it's being open-sourced: together we can accelerate scientific AI to create a safer and healthier future. So now this is where we

Technical Pipeline

actually get to take a look at how this works. This image illustrates the pipeline for METAGENE-1, the model described in their research papers and the one we just saw. First there's wastewater sequencing: samples are collected from wastewater at various locations, and they contain tiny pieces of genetic material from many organisms, such as bacteria, viruses and even unknown species. Then comes deep metagenomic sequencing, where the samples are processed with sequencing technology to break them down into readable DNA/RNA sequences. Next is the pre-training data: the sequences are converted into a format the AI can understand using a method called byte-pair encoding, creating a huge dataset with over 1.5 trillion base pairs of information. All of that data is then used to train a powerful AI model with 7 billion parameters, which lets the AI learn the patterns in the genetic information. Once it's trained, the arrows lead off to capabilities like identifying pathogens such as harmful viruses or bacteria, and detecting unusual patterns, meaning anomalies in the genetic data.
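The byte-pair-encoding step can be made concrete with a short sketch. Note that METAGENE-1's actual tokenizer, vocabulary and merge rules aren't shown in the video, so the toy reads and the number of merges below are purely illustrative stand-ins for the idea of learning frequent subsequences from raw DNA.

```python
from collections import Counter

def learn_bpe_merges(reads, num_merges):
    """Greedily learn the most frequent adjacent-token merges."""
    corpus = [list(read) for read in reads]  # start with single bases
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        corpus = [apply_merge(toks, best) for toks in corpus]
    return merges

def apply_merge(toks, pair):
    """Replace every adjacent occurrence of `pair` with the merged token."""
    out, i = [], 0
    while i < len(toks):
        if i + 1 < len(toks) and (toks[i], toks[i + 1]) == pair:
            out.append(toks[i] + toks[i + 1])
            i += 2
        else:
            out.append(toks[i])
            i += 1
    return out

def tokenize(read, merges):
    """Tokenize a new read by replaying the learned merges in order."""
    toks = list(read)
    for pair in merges:
        toks = apply_merge(toks, pair)
    return toks

reads = ["ACGTACGT", "ACGTTTAC", "TACGTACG"]  # toy reads, not real data
merges = learn_bpe_merges(reads, num_merges=3)
print(tokenize("ACGTAC", merges))  # → ['ACGT', 'AC']
```

The real model presumably uses a far larger corpus and vocabulary; the point is just that frequent motifs collapse into single tokens, shortening sequences before they ever reach the transformer.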

Model Performance

And, of course, filling in missing pieces of sequences. The goal with this entire model is to use AI to monitor for potential health threats like pandemics and to better understand the world around us. When we look at the benchmarks, we can see that METAGENE-1 achieved state-of-the-art performance on pathogen detection and metagenomic embedding benchmarks, basically stating that this is the best-in-class AI system for detecting pathogens and for genetic embedding, an important part of AI technology. I think this really matters because, like I said before, most people working in AI right now, at least when it comes to LLMs, aren't prioritizing this kind of technology, which is why it's so important when companies decide to focus on this kind of stuff; it's something that is going to become increasingly important in the future. One thing we need to understand in the AI industry is that, whilst yes, it's great to have this technology, a lot of what people are talking about, the idea that ASI and all of these advanced technologies are coming soon, is coming sooner than we think. So it makes sense to build these kinds of protections against future disasters well before they arise: we want to prevent this rather than cure it. Dario Amodei actually talks about this on the Lex Fridman podcast, where he says that, look, this is coming, and he actually testified, which you saw at the introduction of the video, that this may help bad actors in the future.
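To make the anomaly-detection idea a bit more concrete: once sequences can be embedded as vectors, one simple baseline is to score each new embedding by its distance from the centroid of known, "normal" embeddings. In the sketch below, random vectors stand in for real model embeddings, so this only demonstrates the scoring logic, not actual pathogen detection.

```python
import math
import random

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def anomaly_scores(reference, queries):
    """Distance to the reference centroid, normalized by the typical spread."""
    c = centroid(reference)
    spread = sum(distance(v, c) for v in reference) / len(reference)
    return [distance(q, c) / spread for q in queries]

random.seed(0)
dim = 16
reference = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(200)]
normal = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(5)]
novel = [[random.gauss(6, 1) for _ in range(dim)] for _ in range(5)]  # shifted
scores = anomaly_scores(reference, normal + novel)
# The shifted "novel" embeddings score far higher than the familiar ones.
```

A real system would use learned embeddings and a calibrated threshold, but the core idea is the same: sequences whose embeddings fall far from everything previously seen get flagged for review.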

Bio Risks

I testified in the Senate that we might have serious bio risks within two to three years. That was about a year ago, and things have proceeded apace. So we have this thing where it's surprisingly hard to address these risks, because they're not here today, they don't exist, they're like ghosts, but they're coming at us so fast because the models are improving so fast.

OpenAI Safety

Now, OpenAI has acknowledged that these new models increase the risk of misuse to create bioweapons. They actually spoke about this when they unveiled the o1 series, and I'm pretty sure most of you know what the o1 series is: the more advanced line of AI models, a lot more comprehensive in its abilities, which also makes it more useful to individuals trying to become bad actors. For those of you who think this isn't true, OpenAI actually published this with their o1 release. With every model release there is usually a safety section, and it's something most people won't pay attention to, because, number one, it's not really flashy, and, number two, you can't really gain anything from it; it's not like you can read this paper and find some way to make your AI smarter for your daily life. It's just something that talks about future AI risks. What they talk about here is biological threat creation, and this is where you can see the score for both o1-preview and o1-mini. It says their evaluations found that o1-preview and o1-mini can help experts with the operational planning of reproducing a known biological threat, which meets their medium risk threshold. Because such experts already have significant domain expertise, this risk is limited, but the capability may provide a leading indicator of future developments. The important thing to note here is that they do state the models do not enable non-experts to create biological threats, because creating such a threat requires hands-on laboratory skills that the models cannot replace. Basically stating that

Access Requirements

whilst, yes, these models might be able to piece together knowledge that exists on the internet, and it might be easier to get it from an LLM, you still need a real-world laboratory where you would actually synthesize these pathogens or whatever biological threats you're trying to create. That's assuming you even manage to jailbreak these models in the first place. And these models are fairly specialized: the majority of people I know don't use o1-preview, o1-mini, or the o1 series at all on a day-to-day basis. What I'm essentially saying is that these are the best models for science, for coding, essentially anything that requires real cognitive ability; these are the models with the most impact. So I do think this is going to be looked into a lot more, because as things evolve, most people aren't really going to be paying attention to what these super-advanced models can do; they're going to pay attention to the models in the GPT series, the stuff that can just talk to you day to day. The majority of individuals simply have a very basic experience with AI. But, like I said, these models can already help people. Now, the craziest thing is that there were more studies done, which is why companies like Prime Intellect with METAGENE-1 are definitely needed. When we look at this research, it asks: can large language models democratize access to dual-use biotechnology? This was a study from the Massachusetts Institute of Technology, MIT. Now, this one was particularly interesting; it was done in

MIT Study

2023, so it has some pretty concerning implications, because if it was done in 2023, when we only had GPT-4, imagine what that study could show us now. Basically, they talk about how large language models, such as those embedded in chatbots, are accelerating and democratizing research by providing comprehensible information and expertise from many different fields. However, these models may also confer easy access to dual-use technologies capable of inflicting great harm. To evaluate this risk, the Safeguarding the Future course at MIT tasked non-scientist students with investigating whether LLM chatbots could be prompted to assist non-experts in causing a pandemic. In one hour, the chatbots suggested four potential pandemic pathogens, explained how they could be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization. And here is the conclusion: collectively, these results suggest that LLMs will make pandemic-class agents widely accessible as soon as they are credibly identified, even to people with little or no laboratory training. I think that's somewhat concerning: pandemic-class agents widely accessible, and this was already possible back in 2023. We're going to have to do a really good job of ensuring these technologies are safe, in the sense that these advanced models aren't helping bad actors do these kinds of things. And they

Prevention Measures

talk about promising nonproliferation measures, including pre-release evaluations of LLMs by third parties, curating training datasets to remove harmful concepts, and verifiably screening all DNA generated by synthesis providers, or used by contract research organizations and robotic cloud laboratories, to engineer organisms or viruses. Of course, the main thing is that it's pretty hard to do this now without access to a laboratory and all that, but in the future it's quite likely that with advanced AI this is going to become a lot easier. And this was a study that took just one hour, back in 2023, so you can imagine what is possible with the o1 series. Now, interestingly enough, there was another study done more recently.
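The "verifiably screening all DNA" measure can be sketched as a toy k-mer blocklist check, which is the rough shape of synthesis-order screening. The hazard sequence, window length and exact-match rule below are all invented for illustration; real screening systems are far more sophisticated (handling reverse complements, fuzzy matches, curated hazard databases, and so on).

```python
K = 12  # matching window size (a made-up choice for this sketch)

def build_index(hazard_sequences, k=K):
    """Index every k-mer that appears in any sequence of concern."""
    index = set()
    for seq in hazard_sequences:
        for i in range(len(seq) - k + 1):
            index.add(seq[i:i + k])
    return index

def screen_order(order, index, k=K):
    """Flag an order if any window matches a sequence of concern."""
    return any(order[i:i + k] in index for i in range(len(order) - k + 1))

hazards = ["ACGTTGCAGGTCAAGTC"]  # placeholder string, not a real pathogen
index = build_index(hazards)
print(screen_order("TTTACGTTGCAGGTCAAA", index))  # → True (shares a 12-mer)
print(screen_order("AAAAAAAAAAAAAAAAAA", index))  # → False
```

The design choice worth noting is that screening is a lookup against known hazards, whereas METAGENE-1's anomaly detection targets the opposite case: sequences that match nothing known.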

Team Research

It's now 2025, but in 2024 there was a red-team study on the operational risks of AI in large-scale biological attacks, and it came to a somewhat different conclusion. This research, involving multiple LLMs, indicated that biological weapon attack planning currently lies beyond the capability of frontier LLMs as assistive tools, and the authors found no statistically significant difference in the viability of plans generated with or without LLM assistance, which is pretty interesting considering the previous study. They note that this research did not measure the distance between the existing LLM capability frontier and the knowledge needed for biological weapon attack planning, which is pretty important: they're basically saying they didn't measure the gap between where we are now and where we'd need to be, which means that gap could be closed pretty quickly. As Dario Amodei said, this is something that is coming fast, and considering how quickly China has been able to catch up to the o1 series, it's quite likely we're going to see more developments that catch up to OpenAI and in some cases even surpass it; with the open-source ecosystem, that can't be good for biological weapon attack planning. The study even says that, given the rapid evolution of AI, it's prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning, which is of course really important. The authors also identified what they term unfortunate outputs from the LLMs, but these outputs generally mirror information readily available on the internet, suggesting that LLMs do not substantially increase the risk associated with biological weapon attack planning. I do think that is a good thing, but as we know, just because LLMs can't do something today doesn't mean they won't be able to do it tomorrow. So it's

Future Implications

pretty likely that in the next two to three years this is going to become a major issue, probably even by the end of this year, as Dario Amodei said. And I think this is going to change the entire AI ecosystem, because what happens when we have models so smart that certain capabilities mean they can't be released? Well, they'll be released, but so lobotomized that the average person might not really have access to them. So maybe you'll need a special access code to actually talk to the full model, because otherwise it just becomes too dumb. I think that might be a thing in the future: special clearance to talk to a certain model, so that it's not used in a bad way. It will be interesting to see how the AI industry prepares for this. If you enjoyed the video, see you in the next one.
