Digital Biomarkers and Games to Accelerate Geroscience | Keith Comito at Longevity Biomarker Summit




Table of Contents (6 segments)

Segment 1 (00:00 - 05:00)

Hello everyone. Just make sure I got the clicker working here. Okay. All right. So, firstly, I'd like to thank the organizers and hosts at Infinita City, Próspera, and Rejuve.AI. It's my second time here in Roatán, and I'm glad to be here again for this important and exciting event on longevity biomarkers. As many of you know, I'm Keith Comito, president and CEO of the Lifespan Research Institute, a 501(c)(3) nonprofit which accelerates the science and systems for longer and healthier lives by uniting researchers, investors, and the public through our various programs. I am also a member of decentralized science organizations such as VitaDAO, Jellyfish, and Gitcoin. And I'm also a technology inventor, having worked with various sports and media companies, most recently Disney, where I was technology lead of advanced research, as mentioned, and where I still consult to create digital-biomarker-related technologies involving face, voice, and body tracking data.

And it's precisely these types of technologies I'd like to talk about today, as they have the potential to squarely address a key challenge facing aging research, which we're discussing here at this event and many others: the need for clear biomarkers to assess biological age that are both accurate and responsive to interventions, and which can facilitate the creation of relevant indications that can be targeted in clinical trials, which can in turn accelerate the field and expedite the delivery of successful aging interventions to the public. The key point I'd like to make here is the immense value of functional biomarkers based on non-invasive data, such as audio and video, and how prioritizing the creation of these can lead to meaningful wins quickly, wins that can in turn catalyze system-wide adoption of age-related biomarkers generally.

As an example of low-hanging fruit in this regard: of the over 7 million Americans currently living with dementia, an estimated 58.7% are either undiagnosed or unaware of their diagnosis, largely because 97% of primary care physicians don't proactively screen for it or even ask about it during checkups, because, among other reasons, it's awkward to bring up. And yet technology exists right now which could provide early detection based on your voice alone. Coupling this with emerging drugs that were mentioned earlier, like Leqembi, could mitigate dementia progression at the personal level and consequently be worth tens of billions of dollars to the US economy. So why isn't this technology systematized in every annual checkup?

In addition to chronic diseases, this diagnostic potential maps to age-related infectious diseases as well. During the COVID-19 pandemic, for example, I worked on projects surrounding vocal diagnostics of the disease, in relation to machine learning work coming out of MIT and other organizations that were leveraging convolutional neural networks to assess audio-based parameters like vocal cord strength, respiratory performance, muscle degradation, and even sentiment, using spectrogram analysis and techniques such as mel-frequency cepstral coefficients, or MFCCs. And surprisingly, the voice diagnostics that were deployed at the time were much more sensitive for COVID than the antigen testing we were using, compared against ground-truth PCR tests, even for asymptomatic carriers, and with high specificity as well. And if you're asking yourself, why wouldn't something like this false-positive on a regular cold? That's because it's not detecting phlegm or anything in your throat like that; it's because a disease like COVID-19 affects your neurological system, which literally changes how your vocal cords vibrate, and this can be detected. And as you can imagine, this translates to other conditions that affect your neurological system as well, such as Alzheimer's disease.
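As a rough illustration of the MFCC technique mentioned above, here is a minimal NumPy/SciPy sketch of the standard pipeline (framing, power spectrum, mel filterbank, log compression, DCT). The parameter values are common textbook defaults, not those of any deployed diagnostic:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame and window the signal, then take the power spectrum per frame.
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Project onto the mel filterbank, log-compress, then DCT to decorrelate.
    mel_energy = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    return dct(mel_energy, type=2, axis=1, norm="ortho")[:, :n_coeffs]

# Example: a synthetic 440 Hz tone stands in for recorded speech.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
coeffs = mfcc(np.sin(2 * np.pi * 440 * t), sr=sr)
print(coeffs.shape)  # (frames, coefficients per frame)
```

The resulting coefficient matrix is the kind of compact spectral summary that a convolutional model can be trained on.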
Now, these voice-based examples are for downstream diseases where aging is a major risk factor, when of course what we're really after is a comprehensive assessment of biological age itself, based on non-invasive diagnostics. This might seem far-fetched, but is it really that hard to imagine? Human beings do this every day with our eyes and ears and the machine learning models in our brains, right? You know: "Ooh, she doesn't sound so good," or "Wow, he looks really great for his age." So why should it be so difficult for machine learning models to do the same? And in fact, there has been promising work in a variety of non-invasive diagnostic areas in addition to voice, such as face and walking gait analysis, which if combined in the right way could arrive at a purely non-invasive proxy assessment of biological age, one with the added narrative benefit of being based on aspects of aging that people directly experience and care about: How do I look? How do I sound? Can I move? Can I do the things I love to do? For example, video data is already being used to assist in the creation of

Segment 2 (05:00 - 10:00)

machine learning classifiers for conditions such as sarcopenia, cognitive frailty, and heat stress, which actually intersects with technology in use by sports and media companies such as ESPN: able, for example, to precisely track the skeletal positions of UFC fighters, even to do things like calculate the newtons of force transferred in a punch and identify the kinds of strikes being delivered in combat. This intersection is apparent in relation to facial biomarkers as well. For example, consider the Disney FRAN model used in altering the facial ages of actors in movies, which you've probably seen at least once. Because after all, what is an important first step in visually shifting someone's facial age? Understanding what age their face currently is, right?

The goal for us is to put all of these technologies together into a multimodal data collection and analysis unit able to assess the deviation between chronological age and biological age, and be responsive to interventions, by analyzing just face-, voice-, and gait-based biomarkers: an initiative we're calling SAGE, or maybe Sage; I'm still trying to figure out which is the catchiest version, so you can tell me later. And importantly, this could be engineered to collect the associated data alongside other data in a very streamlined manner, such as survey questions and ideally additional tests such as reaction time, vibrotactile sensitivity, and visual perception, which were actually elements of an AgeMeter device that we crowdfunded at Lifespan years ago on the recommendation of Dr. George Church. In clinical scenarios, such a multimodal unit could be more robust and also collect blood-based and other biomarkers at the same time, which is important because, as Dr.
Matt Kaeberlein and others on our scientific advisory board have noted, it is unlikely that any one biomarker test alone is going to do the job by itself, which was also discussed here. What we need, at least initially, is the collection of everything, all together, all at once. Furthermore, assessing biomarkers of multiple modalities can facilitate interesting correlations between them and the answering of fundamental questions, such as: how frequently does each biomarker need to be analyzed in order to ensure precision and accuracy? It sounds like a really obvious question, but in many cases this really isn't known yet. And importantly: how well do biomarkers based on comparatively easy-to-collect data, such as what could be accomplished on a mobile phone, compare to those collected with more invasive or costly data collection methods? Answering questions like these was actually the basis of a project I worked on personally with the chief science officer of Disney, surrounding multimodal data collection kiosks which can pair the collection of non-invasive data simultaneously with other types of data, such as phlebotomy, PCR testing, etc. Such units, as you can imagine, could be incredibly useful in clinical scenarios, especially if their use were standardized. And there's also an opportunity to leverage the correlations I mentioned earlier to validate diagnostics which could be used in non-clinical scenarios through, for example, a phone in your hand. Now, there are challenges to doing this, of course, such as how to capture high-fidelity information when you're not in coached lab scenarios, working with unstructured data, and using comparatively pared-down hardware.
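Setting those capture challenges aside for a moment, the fusion step behind a SAGE-style composite readout can be sketched in a few lines. The modality names, weights, and numbers below are purely illustrative assumptions; a real system would learn the combination from validated cohort data:

```python
# Toy fusion of per-modality age estimates into one functional-age proxy.
# MODALITY_WEIGHTS is an invented placeholder, not a published calibration.
MODALITY_WEIGHTS = {"face": 0.40, "voice": 0.35, "gait": 0.25}

def functional_age(estimates: dict) -> float:
    """Weighted average of whichever per-modality age estimates are present."""
    weight = sum(MODALITY_WEIGHTS[m] for m in estimates)
    return sum(MODALITY_WEIGHTS[m] * age for m, age in estimates.items()) / weight

def age_gap(estimates: dict, chronological_age: float) -> float:
    """Positive gap: the subject presents functionally 'older' than their years."""
    return functional_age(estimates) - chronological_age

# A 50-year-old whose face, voice, and gait models return different ages.
est = {"face": 52.0, "voice": 58.0, "gait": 61.0}
print(round(age_gap(est, chronological_age=50.0), 2))  # weighted gap in years
```

Because the average renormalizes over whichever modalities are supplied, the same readout degrades gracefully when, say, only voice data is available on a phone.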
But new technologies are emerging to solve those challenges, such as real-time markerless 3D tracking of hands, face, and body simultaneously from a single mobile device, able to facilitate not only fun entertainment use cases like this, but also the detection of musculoskeletal and psychomotor parameters useful in the early detection of diseases like Parkinson's, or the assessment of functional fitness and decline, which is very relevant to us, and all without needing multiple cameras, wearables, or motion capture suits, or having to send pre-recorded video to a cloud service and wait for tracking data to be processed later. This can be further bolstered by additional technologies able to overcome typical limitations of body pose analysis, for example when occlusions or non-standard input scenarios occur and cause erratic predictions. One of the old use cases here was when UFC fighters are grappling with each other and the skeletal pose, as you can see, just freaks out; it doesn't know what to make of that. So one such technology I worked on here was called Gemini, which combines skeletal pose tracking methods with machine-learning-based pattern recognizers. For example, if the pattern recognizer could identify that two human bodies were coupled in a guard, it could share that information with the skeletal pose tracker and help it resolve that confusion. And similarly, the skeletal tracker could supply information to the pattern recognizer, in a circularly healing manner. Obviously this particular use case is a different one than

Segment 3 (10:00 - 15:00)

we might be used to here, but you can see how such approaches can be used to augment data collection for gait analysis, for example, and expand tolerance for temporary moments of bad data. As a side note here, I'm a bit proud of the visual quintuple entendre we pulled off with the logo for this project.

Switching gears to the audio side of things: one limitation to overcome in bringing clinical diagnostic power to mobile devices is that clinical approaches are typically based on analyzing specific vocal utterances, such as coughs, which not only restricts training data collection to coached lab scenarios but also limits the deployment of the resulting models, because, for example, models trained on coughs would likely not work on passive speech. Furthermore, such approaches preclude serendipity, because you already have a specific hypothesis built into your data collection model, in this case the hypothesis that coughs are somehow ideal to analyze. Consequently, another project I worked on with Disney's CSO is a scaffolded process of machine learning models daisy-chained together that can operate on unstructured data, such as normal human speech, and in this specific use case can automatically segment vocal data into phoneme subgroups, testing them all in parallel without making any assumptions as to which phoneme would be best, overcoming the earlier-described limitations and broadening potential data collection scenarios to include passive speech from regular mobile devices. Now, I'm mentioning Disney quite a bit in these examples, and this is not only because of my experience leading R&D projects with them for several years, but also because of an important point worth underlining here.
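A toy version of that daisy-chained, hypothesis-free process might look like the following. The phoneme groups, the per-group scoring statistic, and the data are all illustrative stand-ins: a real pipeline would use a forced aligner to label segments and a trained classifier per group.

```python
import numpy as np

# Hypothetical phoneme groups a segmenter might emit (illustrative only).
PHONEME_GROUPS = ["vowels", "nasals", "fricatives", "plosives"]

def segment_speech(features, labels):
    """Stage 1: bucket frame-level features by phoneme group."""
    buckets = {g: [] for g in PHONEME_GROUPS}
    for x, lab in zip(features, labels):
        buckets[lab].append(x)
    return {g: np.array(v) for g, v in buckets.items() if v}

def score_group(frames, reference_mean):
    """Stage 2: toy per-group statistic (distance from a 'healthy' baseline).
    A real pipeline would run a trained classifier per group instead."""
    return float(np.linalg.norm(frames.mean(axis=0) - reference_mean))

def run_pipeline(features, labels, reference_mean):
    """Daisy-chain: the segmenter feeds every per-group scorer in parallel,
    with no prior assumption about which phoneme group is most informative."""
    buckets = segment_speech(features, labels)
    return {g: score_group(v, reference_mean) for g, v in buckets.items()}

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 13))          # stand-in for MFCC frames
labels = rng.choice(PHONEME_GROUPS, size=100)  # stand-in for aligner output
scores = run_pipeline(features, labels, reference_mean=np.zeros(13))
print(sorted(scores))  # every group gets scored; none is privileged a priori
```

The point of the structure is the last line: because every group is tested in parallel, the data collection doesn't bake in a hypothesis about which utterance type matters.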
Technology and media companies already possess advanced technologies that are relevant to us, and they can leverage unique co-development mechanisms that allow entertainment-related IP with health-related applications to sometimes be freely licensed or donated to these fields of use, often in collaboration with standards organizations like NIST and the American Society for Testing and Materials, or ASTM, which are able to align the efforts of various organizations to create standards, which can then be published, enforced by organizations like the FDA, and used as a basis for clinical indications. So in many ways I think this is a key missing piece of the puzzle that we haven't focused on yet: the involvement of these types of standards organizations.

As a quick aside on the importance and role of such initiatives, it's instructive to describe part of what went non-optimally during the COVID-19 defense a few years ago, which I was a part of. When COVID-19 hit, Disney was in a particularly tough spot, as you can imagine, given all the physical venues involved in its operations: theme parks, sporting events, etc. As a consequence, almost every mitigation strategy you can think of was explored, including spread-prevention tactics like masks, of course, and the NBA bubble, if anybody remembers that (it wasn't a literal bubble like that), and diagnostics such as those based on wearables, antigen testing, etc.
All of which proved to be remarkably less effective than one would have hoped, with antigen testing, for example, only having sensitivity in the 40 to 60% range, and performing much worse for asymptomatic carriers as well. All of that is what led us to investigate voice as a detection strategy, as described earlier. At this time, many groups, such as the one from MIT mentioned previously, were working on this in relation to COVID-19, and at Disney we had interfaced with a variety of such groups: universities like MIT, US government initiatives, international and foreign groups, the Indian government, for example. And two surprising things jumped out right away. One, these groups in many cases didn't even know each other existed, even within the US government. And two, as a consequence of that, they were all collecting slightly different data sets for no scientific reason. This in turn led to all of their machine learning models failing to achieve the necessary accuracy when it would have been the most useful, due in part to not having sufficient data to train them on.

To give a very concrete example of this: some groups were working on frequency-based analysis, such as looking at the resonance inherent to coughs or certain phonemes like the sound "m," and as such they would be collecting data geared for this type of analysis, for example the Harvard sentences, specific sentences that are rich in every phoneme, in addition to survey data like age, sex, medical history, etc. Alternatively, some groups were working on linguistics-based analysis, which could be a flag for brain fog, which David was talking about earlier, and cognitive impairment, and as such would be collecting a different type of data, for example asking a patient to describe a picture in as detailed language as possible, in addition

Segment 4 (15:00 - 20:00)

to collecting that same type of survey data. In each of these cases, the collected data sets were being used to train different types of machine learning models: spectrogram analysis using techniques like mel-frequency cepstral coefficients in the case of the frequency data, and automated linguistic models in the case of the linguistics data. And importantly, because those working on frequency analysis did not collect data such as those unscripted picture descriptions, and because those working on the linguistics data did not collect coughs or the Harvard sentences, the respective data sets were not useful to each other's analysis. Whereas, as I'm sure you can see, if both had just collected a standardized and comprehensive data set, which would have taken just a little bit longer with each of their patients, all of them would have had useful data that would have bolstered their individual analyses without impinging on anyone's proprietary approaches. Doing this is not complicated; it just requires standardization and cooperation. And if this had been done years ago during the peak of COVID, I believe it likely would have contributed to countless lives being saved by just a simple organizational change. This led us to reach out to ASTM to create a task group on physiological-biomarker-related data collection, of which I'm the chair, to address exactly these sorts of issues, with an aim of arriving at a non-invasive diagnostic unit able to collect multimodal data in a standardized manner that can pave the way for targetable clinical indications for age-related functional decline.
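The standardization point can be made concrete with a hypothetical session record that serves both camps at once. The field names below are invented for illustration and are not drawn from any published ASTM standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VoiceSessionRecord:
    """One standardized collection session (illustrative schema only)."""
    participant_id: str
    age: int
    sex: str
    medical_history: list = field(default_factory=list)
    # Inputs needed by frequency-based (spectrogram/MFCC) analysis:
    cough_audio_path: Optional[str] = None
    harvard_sentences_audio_path: Optional[str] = None
    # Inputs needed by linguistics-based analysis:
    picture_description_transcript: Optional[str] = None

    def supports_frequency_analysis(self) -> bool:
        return (self.cough_audio_path is not None
                or self.harvard_sentences_audio_path is not None)

    def supports_linguistic_analysis(self) -> bool:
        return self.picture_description_transcript is not None

# A single session collected under the shared standard serves both
# research groups' models, without sharing either group's methods.
rec = VoiceSessionRecord(
    participant_id="p001", age=67, sex="F",
    cough_audio_path="p001_cough.wav",
    harvard_sentences_audio_path="p001_harvard.wav",
    picture_description_transcript="A boy reaches for a cookie jar...",
)
print(rec.supports_frequency_analysis(), rec.supports_linguistic_analysis())
```

The design choice is simply that every session carries the superset of inputs, so no group's downstream analysis is starved of data.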
The road map is essentially: unite technologies across fields to develop multimodal data collection and analysis units; create mobile device versions; precisely understand the relative fidelity between these and clinical units; and work with organizations like NIST and ASTM to codify related standards that can in turn promote the adoption of clinical indications for functional age by the FDA, which can then be targeted in clinical trials. As you can imagine, this could also naturally pair with initiatives that were mentioned earlier, such as the ARPA-H PROSPR program, which is seeking to develop a highly accurate and notably easy-to-measure metric that quantifies age-associated health changes and response to interventions, and the XPRIZE Healthspan prize, which could potentially benefit from additional and comparatively quick ways to assess competing projects with regard to improvement in functional age.

Now, to do all of this correctly is quite the undertaking, as you can imagine, requiring field-wide collaboration, and here's a very rough example of what that can look like, leveraging the collaboration of our various allies and the unique co-development relationships with major tech companies mentioned earlier. But separating out and starting with attaining a clinically relevant biomarker for non-invasively detectable functional age, or whatever we want to call it, can also allow us to move much quicker than tackling the entire gamut of all potential biomarkers, and has the unique advantage, again, of assessing aspects of aging that have an immediate, tangible meaning to people. Success with this would be of great value not only to researchers and the public but also to startup companies and big pharma. Consider, for example, a kiosk able to collect all needed data easily in a few minutes, with its use systematized and standardized in all clinical trials.
It could essentially turn every clinical trial into an aging trial, kind of by accident, and give every therapy or drug play an additional indication it might serendipitously succeed on, increasing everyone's shots on goal. Furthermore, reliable mobile diagnostic units can also pave the way for truly decentralized science, which I've spoken about at length in previous talks and won't retread too much here, other than to note that the ability to have standardized metrics collected from thousands of participants, as Jasmine was also talking about, can enable new kinds of clinical trials which can leverage combinatorial approaches regarding dosing variations and intervention combinations, while still retaining the statistical power you need through those large participant sizes. This can be especially relevant in cases where the therapeutic itself can be software-driven or digital in nature, such as with our own Mindset project at the Lifespan Research Institute, which utilizes machine learning to facilitate adaptive, non-invasive potential interventions for dementia. I mentioned this a bit in the panel yesterday. This also, as you can imagine, has the power to be uniquely inspiring to the public, because, as I mentioned before, after decades of work and a trillion dollars spent against dementia yielding little result, if an online participant group using stimuli deployed on their phones, like light and sound, were able to reverse or even mitigate Alzheimer's at all, that

Segment 5 (20:00 - 25:00)

would be an earthquake, potentially catalyzing a shift towards participant-led clinical trials, decentralized clinical trials that can also speak to the needs of the present moment, where the average person, at least in the US, is losing faith that existing health care systems will take care of them. You know, I think there's something very powerful and inspiring in saying: you don't have to wait for trickle-down therapies if you are a part of building them together from the bottom up.

And in relation to infinite games, it's worth noting that such approaches are readily combinable with blockchain technologies and gaming projects, some of which I've spoken about in the past, such as a philanthropic roguelike game based on the Fable of the Dragon-Tyrant, which is well known, at least in our field: a metaphorical story by the philosopher Nick Bostrom regarding our relationship to disease and death and how we might fight them. Specifically, what if in a game such as Dragon Tyrant, as you progress through the dungeon and get closer and closer to defeating the Dragon Tyrant, a metaphor for aging, the game itself were to take on some of these audiovisual stimulus modalities, so that you're potentially fighting the dragon tyrant in your own body? And since a game like this would need to be immutably tracking things like decision-making speed longitudinally, that's a cognitive biomarker. So if we were to structure the game so that this happens, but only for a subset of players, with the other players receiving sham stimuli instead, that starts to sound like a recipe for a decentralized clinical trial administered through a game.
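A minimal sketch of the blinded arm assignment such a game-delivered trial would need, assuming a deterministic hash-based scheme; the salt, field names, and logging format are all illustrative:

```python
import hashlib

def assign_arm(player_id: str, salt: str = "dragon-tyrant-trial-v1") -> str:
    """Deterministically assign a player to 'active' stimuli or 'sham'.
    Hashing id+salt gives a stable, roughly uniform split without a
    central randomization server; the salt here is a made-up example."""
    digest = hashlib.sha256(f"{salt}:{player_id}".encode()).digest()
    return "active" if digest[0] % 2 == 0 else "sham"

def log_decision(player_id: str, reaction_ms: float, ledger: list) -> None:
    """Record decision-making speed (the cognitive biomarker) with the arm,
    as a stand-in for the game's immutable longitudinal log."""
    ledger.append({"player": player_id,
                   "arm": assign_arm(player_id),
                   "reaction_ms": reaction_ms})

ledger = []
for pid in ("p1", "p2", "p3", "p4"):
    log_decision(pid, reaction_ms=350.0, ledger=ledger)
print({entry["arm"] for entry in ledger})  # arms observed among these players
```

Because assignment depends only on the player id and salt, the same player always lands in the same arm across sessions, which is what longitudinal analysis requires.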
And furthermore, this could even be combined with other types of biomarkers, like those being collected here, and with other game-related technologies I've spoken about in the past, such as specific types of dynamic NFTs which can change based on philanthropic contributions or biomarker data, which I've called proof of philanthropy and bio-avatars, respectively, truly closing the gamification loop on incentivizing healthy actions in users. Imagine if there's a huge prize pool in this game: you could win a million dollars if you're the first person to kill the dragon tyrant, but your character becomes stronger if you are taking care of your health, giving you an edge, things like this. It's a much more user-serving model than some of the exploitative, dopamine-hacking game loops that certain games you probably know and love employ. These are just a few examples of how emerging digital biomarker technologies can potentially transform your phone into the clinic of the future. And while obviously this was just a quick overview, for the Mindset, SAGE, and Dragon Tyrant initiatives in particular, we have clear road maps, next steps, and exciting potential collaborators, and are only limited by resources, with significant milestones achievable for just a few million dollars total in each case. So if you'd like to see any of these projects move forward, or drive forward our other programs at the Lifespan Research Institute, please come talk to me. We're equipped to receive stocks, donor-advised funds, crypto, you name it. And nonprofits like us, of course, need your support in these interesting times.
Not only to drive forward paradigm-shifting technologies and research uniquely accomplishable by nonprofits, such as foundational research into understanding and addressing senescent cell accumulation and mitochondrial dysfunction, but also to support educational and outreach initiatives such as our news outlet at Lifespan.io, which some of you probably read; collaboration videos with large YouTube channels like Kurzgesagt, which engage millions of people regarding the science and societal benefits of aging research; and our new public longevity group initiative, combining machine learning and sentiment modeling to assess societal views on longevity, predict narrative risks, and help us all design precision messaging to accelerate broad adoption of aging research, essentially non-invasive biomarkers for public sentiment. And through public engagement initiatives like these, and ecosystem-building initiatives like our education program and the Longevity Investor Network, which investors can freely join to hear pitches from curated startups in the field and our own forthcoming spinouts: come talk to me. We've been working for many years to make the ground fertile for everyone in the field and thereby create more investors, more companies, and more available resources to support you all, by doing all we can to educate and convince the world to prioritize this incredibly important work. And one great way to support all of this is by joining our Lifespan Alliance, a growing consortium of mission-aligned organizations that believe in the force-multiplying power of our work and who are afforded various opportunities to engage more deeply in our mission, such as sponsoring our events and crowdfunding campaigns, priority in amplifying on-mission initiatives through our

Segment 6 (25:00 - 25:00)

large-scale news and video channels, and the ability to collaborate on paradigm-shifting initiatives such as those discussed in this talk. So please use this QR code to learn about the Lifespan Alliance, with links for general donations and our news outlet, to help anyone get informed and involved however you can. And I think that's all I have time for, but if you are interested in learning more, or in becoming Iron Man or shooting lasers out of your eyes in the pursuit of longevity, please come find me and I can show you some things on my phone. Thank you.
