# OpenAI x Broadcom — The OpenAI Podcast Ep. 8

## Metadata

- **Channel:** OpenAI
- **YouTube:** https://www.youtube.com/watch?v=qqAbVTFnfk8
- **Date:** October 13, 2025
- **Duration:** 28:49
- **Views:** 105,599

## Description

OpenAI and Broadcom are teaming up to design our own chips—bringing lessons from building frontier models straight into the hardware. In partnership with Broadcom and alongside our other partners, we’re creating the next generation of AI infrastructure to meet the world’s growing demand.


In this episode, OpenAI’s Sam Altman and Greg Brockman sit down with Broadcom’s Hock Tan and Charlie Kawwas to announce a new partnership focused on custom AI chips and systems that could redefine what’s possible in computing.


Chapters
00:00 Announcing the partnership
03:06 The scale of AI infrastructure
06:03 Collaboration and innovation in chip design
08:49 Historical context and future vision
12:10 Role of compute in AI development
15:01 Optimizing for specific workloads
18:02 Journey towards AGI
21:00 Future of AI and compute capacity
23:50 Wrap-up and future projects

## Contents

### [0:00](https://www.youtube.com/watch?v=qqAbVTFnfk8) Announcing the partnership

Andrew Mayne: Hello, I'm Andrew Mayne, and welcome to the OpenAI Podcast. Today, we're excited to be breaking some news involving Broadcom and OpenAI. Joining me from OpenAI are Sam Altman and Greg Brockman, and from Broadcom, Hock Tan and Charlie Kawwas.

Sam Altman: A lot of ways that you would look at the AI infrastructure build-out right now, you would say it's the biggest joint industrial project in human history.

Charlie Kawwas: We're defining civilization's next-generation operating system.

Greg Brockman: That is a drop in the bucket compared to where we need to go.

Sam Altman: It's a big drop...

Andrew Mayne: So what are we talking about today? What brought you all together?

Sam Altman: So today we're announcing a partnership between Broadcom and OpenAI. We've been working together for about the last 18 months designing a new custom chip. More recently, we've also started working on a whole custom system. These things have gotten so complex, you need the whole thing. And we will be starting late next year, deploying 10 gigawatts of these racks, of these systems and our chip, which is a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence.

Andrew Mayne: So this is going to entail both compute and chip design and scaling out?

Sam Altman: This is a full system. So we closely collaborated for a while on designing a chip that is specific to our workloads. When it became clear to us just how much capacity, inference capacity, the world was going to need, we began to think about: could we do a chip that was meant just for that kind of very specific workload? Broadcom is the best partner in the world for that, obviously. And then, to our great surprise, this was not the way we started, but as we realized that we were going to really need the whole system together to support this, as this got more and more complex, it turns out Broadcom is also incredible at helping design systems. So we are working together on that entire package, and this will help us even further increase the amount of capacity we can offer for our services.

Andrew Mayne: So, Hock, how did this come about? When did you guys first talk about working together on this?

Hock Tan: Well, other than the fact that Sam and Greg are great people to work with, it's a natural fit, because OpenAI has been doing, and continues to do, the most advanced models, frontier models, in generative AI out there. But as part of it, you continue to need compute capacity, the best, latest compute capacity, as you progress on a roadmap towards a better and better frontier model and towards superintelligence. And compute is a key part, and that comes with semiconductors and, as Sam indicated, more than semiconductors. And we are, even though I say it myself, probably the best semiconductor company out there. And more than that, AI is a very, very exciting opportunity for us in terms of my engineers pushing the innovation envelope in newer generations of semiconductor technology.

### [3:06](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=186s) The scale of AI infrastructure

Hock Tan: So for us, collaborating with the best generative AI company out there is a natural fit.

Andrew Mayne: And this isn't just chips, it's going out to scale, like 10 gigawatts. And I have trouble even understanding that. What does that even mean when you're talking about 10 gigawatts?

Sam Altman: First of all, you said it's not just chips, and Hock touched on this too, but the vertical integration point is really important. We are able to think from etching the transistors all the way up to the token that comes out when you ask ChatGPT a question, and design the whole system: all of the stuff about the chip, the way we design these racks, the networking between them, how the algorithms that we're using fit the inference chip itself, a lot of other stuff, all the way to the end product. And one of the many reasons I'm so excited about it is that by being able to optimize across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models, all of that. As you get that better performance and cheaper and smarter models, one thing that we have consistently seen is people just want to use way more. So we used to think, oh, we'll optimize things by 10x and we'll solve all of our problems. But you optimize by 10x and there's 20x more demand. So 10 gigawatts, 10 incremental gigawatts, this is all on top of what we're already doing with other partners and all the other data centers and silicon partnerships we've done. 10 gigawatts is a gigantic amount of capacity. And yet, if we do as good a job as we hope, even though it's vastly more than the world has today, we expect that with very high-quality intelligence delivered very fast and at very low price, the world will absorb it super fast and just find incredible new things to use it for. So what is the hope with this? The hope is that the kinds of things people are doing now with this compute, writing code, automating more and more of enterprises, generating videos in Sora, whatever it is, they will be able to do that much more of it and with much smarter models.

Andrew Mayne: It's amazing. So Greg and Charlie, when you think about historically, when people have tried to develop chips or hardware to suit whatever was the current mode of computing at that point, what examples have you looked upon historically to figure out how to plan forward? What's been inspiring you when you think about this?

Greg Brockman: Well, I'd say the number one thing, honestly, is working with good partners. I think it's very clear that we, as a company, are not able to do everything ourselves. And getting into actually building our own chips for our own specific workloads was not something we could do from a total standstill without working with Hock and Charlie and Broadcom. So it's just been really incredible to lean on their expertise, together with our understanding of the workload. And it's been actually very interesting to see the places where OpenAI is able to do things very differently

### [6:03](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=363s) Collaboration and innovation in chip design

Greg Brockman: from the rest of the industry, or the way that things would historically be done. For example, we've been able to apply our own models to designing this chip, which has been really cool. We've been able to pull in the schedule and get massive area reductions, right? You take components that humans have already optimized and just pour compute into it, and the model comes up with its own optimizations. And it's very interesting. We're at the point now where I don't think any of the optimizations we have are ones that human designers couldn't have come up with. Usually our experts take a look at it later and say, yeah, this was on my list, but it was like 20 things that would have taken them another month to get to. And it's actually really, really interesting: we were coming up on a deadline working with Charlie's team, and we were running optimizations. We had a choice: do we actually take a look at what those optimizations were, or do we just keep going until the deadline and then take a look after? And we decided, of course, you've got to just keep going. And so we've really been building up this expertise in-house to understand this domain. And that's something we actually think can help lift up the whole industry. But I think that we are heading to a world where AI intelligence is able to help humanity make new breakthroughs that just would not be possible otherwise. And we're going to need just as much compute as possible to power that. One example of something very concrete is that we are in a world now where ChatGPT is changing from something that you talk to interactively to something that can go do work for you behind the scenes. If you've used features like Pulse, you wake up every morning and it has some really interesting things that are related to what you're interested in. It's very personalized. And our intent is to turn ChatGPT into something that helps you achieve your goals. The thing is, we can only release this to the Pro tier, because that's the amount of compute that we have available. And ideally, everyone would have an agent that's running for them 24/7 behind the scenes, helping them achieve their goals. And so ideally, everyone has their own accelerator, has their own compute power that's just running constantly. And there's 10 billion humans. We are nowhere near being able to build 10 billion chips. And so there's a long way to go before we are able to saturate not just the demand, but what humanity really deserves.

Andrew Mayne: So, Charlie, being very deeply technical and being with a company that's been at the forefront of a number of these revolutions, what's it been like working with a company like OpenAI and working with Greg on this?

Charlie Kawwas: So for us, it's been absolutely exciting and refreshing, because the beauty of the work we do together is the focus on a certain workload. We started actually first looking at the IP and the AI accelerator, which is what we call the XPU. And then we realized very quickly that we can now actually go from the workload all the way down to the transistor. And as Greg was just explaining, we can both work together to customize that platform for your workload, resulting in the best platform in the world.

### [8:49](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=529s) Historical context and future vision

Charlie Kawwas: Then we realized, as Sam was saying earlier on, it's not just that XPU or accelerator. Actually, it's the networking that needs to go with it to scale it up, scale it out, and scale it across. And so suddenly we started seeing that we can actually drive the next level of standardization and openness, which doesn't just benefit us. I think it will actually benefit the entire ecosystem, and it gets gen AI to AGI much faster. So I'm very excited about the technical capabilities of the teams we have, but also the vision, and I think the speed at which we've been moving.

Andrew Mayne: I'm still kind of wrapping my head around the scale of it, because it's everything from trying to design something like a chip and figure out how you're going to get the maximum efficiency out of it, to just the size of it, the infrastructure, what's involved in this. This is a global effort. What comparisons have you been able to draw to other examples in history?

Sam Altman: I always think the historical analogies are tough. I don't know what fraction of global GDP building the Great Wall was at the time. But a lot of ways that you would look at the AI infrastructure build-out right now, you would say it's the biggest joint industrial project in human history. And this requires a lot of companies, a lot of countries, a lot of industries to come together. A lot of stuff has to happen at the same time, and we've all got to kind of invest together. But at this point, given everything we see coming on the research front, given all of the value we see being created on the business front, I think the whole industry has decided this is a very good bet to take. But it is huge. You go to even one of these one-gigawatt data centers, and you look at the scale of what's happening there. It's like a tiny city. It's a big, complex thing. So it is just incredible scale.

Greg Brockman: To the point of this being a massive collaborative project, I feel like whenever I call Charlie, he's in a different part of the world trying to secure capacity, trying to find a way to help us build what we're trying to do together.

Charlie Kawwas: Exactly. Actually, one of the coolest things I was thinking about is what we're doing together in this wonderful partnership. We're defining civilization's next-generation operating system. And we're doing it, as you're saying, at the transistor level, building new fabs, building new manufacturing sites, all the way to building these racks, and ultimately the data centers you're talking about, 10 gigawatts of data centers.

Andrew Mayne: Yeah, I think it's an important thing to keep track of, that often people get fixated just on the chips themselves. And it's kind of like thinking the National Highway Project was about selling asphalt, or railroads were about steel. In reality, it's the things that become possible on top of that. And you've probably thought a lot about that. Like, what happens?

Hock Tan: Well, I think this is like railroads, like the internet. That's what I think this is becoming over time: critical infrastructure, or a critical utility, and more than just a critical utility for, say, 10,000 enterprises. This is a critical utility over time, right, Sam, for 8 billion people globally. I think it's like an industrial revolution of a different sort coming.

### [12:10](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=730s) Role of compute in AI development

Hock Tan: But it cannot be done with just one party, or, as we like to think, with two. More than that, it needs a lot of partnerships. It needs collaboration across an ecosystem. And also because of that, it's important to create, much as we say about developing chips for specific workloads, applications, and LLMs, standards that are open and more transparent for all to use, because you need to build up a whole infrastructure at the end of the day to become a critical utility for 6 billion people in the world. And we're very excited, frankly, which is why we think we make great partners: I think we share the same conviction. And more than that, it is about scaling computing to create breakthroughs in superintelligence and models. It's building the foundation of that.

Andrew Mayne: You guys have a lot on your plate. Why design chips now?

Greg Brockman: Well, you know, we've probably been working on this project for 18 months now, and it's moved incredibly quickly. We've hired some really amazing people. And I think what we found is that we have a deep understanding of the workload. And we work with a number of parties across the ecosystem. And there are a number of chips out there that I think are really incredible. And there's a niche for each one. And so we've really been looking for specific workloads that we feel are underserved. How can we build something that will be able to accelerate what's possible? And so that ability to say that we are able to do the full vertical integration for something we see coming, but that's hard for us to pursue through other partners, that's a very clear use case for this kind of project.

Hock Tan: Yeah, actually more than that, and Greg, you put it very well. Really, why you want to do your own chip is that computing is a big part of what's gating this journey towards superintelligence, towards creating better and better frontier models. A lot of it comes down to computing, and not just any computing: computing that is effective, high-performance, and efficient, especially on power. And what Greg is saying is exactly what we learned and saw here. For instance, if you want to train, you design chips that are much stronger in computing capacity, measured in TFLOPs, as well as in networking, because it's not just one chip that makes it happen. It's a cluster, as Charlie put it. But if you want to do inference, you put in more memory and memory access relative to compute.
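The training-versus-inference tradeoff Hock describes is often reasoned about with a roofline model: a chip is compute-bound when the workload performs many floating-point operations per byte moved from memory, and memory-bound when it performs few. The sketch below illustrates that reasoning; all numbers are hypothetical, not figures from this partnership.

```python
# Illustrative sketch (hypothetical numbers, not OpenAI/Broadcom figures):
# why training-oriented chips emphasize FLOPs while inference-oriented
# chips emphasize memory capacity and bandwidth.

def attainable_tflops(peak_tflops, bandwidth_tb_s, intensity_flops_per_byte):
    """Simple roofline model: achieved throughput is capped either by
    peak compute or by memory bandwidth times arithmetic intensity."""
    memory_bound_tflops = bandwidth_tb_s * intensity_flops_per_byte
    return min(peak_tflops, memory_bound_tflops)

# Hypothetical accelerator: 1000 TFLOPs peak, 4 TB/s memory bandwidth.
peak, bw = 1000.0, 4.0

# Large-batch training reuses each weight many times -> high intensity.
train = attainable_tflops(peak, bw, intensity_flops_per_byte=500)

# Autoregressive inference at small batch reads all weights per token
# generated -> low intensity.
infer = attainable_tflops(peak, bw, intensity_flops_per_byte=2)

print(train)  # 1000.0: compute-bound, so training chips add FLOPs
print(infer)  # 8.0: memory-bound, so inference chips add memory bandwidth
```

Under these assumed numbers the inference workload uses under 1% of peak compute, which is why a chip specialized for inference shifts silicon from compute toward memory.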

### [15:01](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=901s) Optimizing for specific workloads

Hock Tan: So you are actually, over time, creating chips optimized for particular workloads and applications as we go along. And that, at the end of the day, is what will create the most effective models: a platform that you create end-to-end.

Greg Brockman: And also, one piece of historical context is that when we started OpenAI, we didn't really have that much of a focus on compute. We felt that the path to AGI is really about ideas, really about trying things out. Eventually, we'll put the right conceptual pieces in place, and then, AGI. And about two years in, in 2017, the thing that we found was that we were getting the best results out of scale. It wasn't something we set out to prove. It was something we really discovered empirically, because everything else didn't work nearly as well. And the first results were scaling up our reinforcement learning in the context of the video game Dota 2. Did you guys pay attention to the Dota 2 project back in the day? Yes. It was a super cool project. And we really saw: you scale it by 2x, and suddenly your agent is 2x better. It's like, okay, we have to push this to the limit. And at that point, we started paying attention to the whole ecosystem. There were all sorts of chip startups with novel approaches that were very different from GPUs. And we started giving them a ton of feedback, saying, here's where we think things are going; it needs to be models of this shape. And honestly, a lot of them just didn't listen to us, right? And so it's very frustrating to be in this position where you say, we see the direction the future should be going, but we have no ability to really influence it besides just trying to influence other people's roadmaps. And so by being able to take some of this in-house, we feel like we are able to actually realize that vision. And again, in a way that we hope can show a direction that other people will fill in, because of the amount of compute required to bring our vision of AGI to the world, 10 gigawatts is not enough. That is a drop in the bucket compared to where we need to go.

Sam Altman: It's a big drop...

Andrew Mayne: The bucket's really big. What becomes possible with this when you're building your own chips for inference and for training? Where can you take this?

Sam Altman: To zoom out a little bit, you can simplify what we do in this whole process to: melt sand, run energy through it, and get intelligence out the other end. You're not literally melting sand. It's a nice visual.

Hock Tan: That's a good one.

Charlie Kawwas: That's all we have to do.

Hock Tan: I like that.

Sam Altman: What we want is the most intelligence we can get out of each unit of energy, because that will become the gate at some point. And I hope this whole process, from the model we design to the chip to the rack, will show that we can wring out so much more intelligence per watt. And then everybody that's using these models in all of these incredible ways will do so much with it. That's what I hope for.
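Sam's "intelligence per watt" framing can be made concrete as tokens served per joule of energy. The sketch below uses entirely hypothetical throughput and power numbers to show how the metric behaves when a stack is optimized end-to-end.

```python
# Back-of-the-envelope sketch (all numbers hypothetical) of the
# "intelligence per watt" framing: tokens generated per joule consumed.

def tokens_per_joule(tokens_per_second, power_watts):
    """Serving efficiency: 1 watt is 1 joule per second, so dividing
    token throughput by power draw yields tokens per joule."""
    return tokens_per_second / power_watts

# Hypothetical rack drawing 100 kW while serving 50,000 tokens/s.
baseline = tokens_per_joule(50_000, 100_000)   # 0.5 tokens/joule

# A stack optimized across model, chip, and rack that doubles throughput
# at the same power doubles intelligence per watt.
optimized = tokens_per_joule(100_000, 100_000)  # 1.0 tokens/joule

print(optimized / baseline)  # 2.0: twice the tokens from the same energy
```

The point of the metric is that once power becomes the gating resource, efficiency gains translate directly into more intelligence delivered, not just lower cost.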

### [18:02](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=1082s) Journey towards AGI

Hock Tan: And you control your own destiny. If you do your own chips, you control your destiny.

Andrew Mayne: Yeah, it's interesting to think about how the things that we're doing today are pretty amazing, remarkable, but we're using stuff that wasn't actually designed specifically for the way we're using it.

Sam Altman: Oh, I mean, the GPUs of today are incredible, incredible things. I'm very grateful, and we will continue to really need a lot of those. The flexibility, and the ability to let us do fast research, is amazing. But you are right that as we get more and more confident in what the shape of the future is going to look like, a system highly optimized to the workload will let us wring more out per watt. That's great.

Charlie Kawwas: And it's a long journey that takes decades. So if you go back to Hock's example, take railroads: it took about a century to roll out as critical infrastructure. If you take the internet, it took about 30 years. This is not going to take five years. It's going to take a long time. So I think as we collectively, especially with this partnership, continue to figure out ways to wring more tokens out of it, we'll discover that, oh, for this training or research, maybe a GPU is great. Or maybe, you know what, we can take whatever we're doing with Greg. It's actually a platform that allows you, like a Lego block, to swap things in and out. And now suddenly we can get another XPU or accelerator for the next generation that's targeted at training or inference or research.

Greg Brockman: Yeah, and to Sam's point that GPUs have really come an incredible way: in 2017, when we started looking at all these other accelerators, it was actually very non-obvious what the landscape would look like in 5 or 10 years. And I think it's really a testament to companies like NVIDIA and AMD how much the GPU has moved forward and continued to be the dominant accelerator. But at the same time, there's a massive design space out there, right? And I think that what we see is workloads that are not served through existing platforms. And that's where that full vertical integration is something unique.

Andrew Mayne: It's interesting too, because the idea that you'd want to put inference close to the user is something relatively new. You know, we understood training, but then you think about the number of people every day using these products and how much compute they need to do fun things or serious things. And when you start thinking about the scale of it, like we talked about before, I keep coming back to: it's a very big thing. Where does it keep going? Is it just a thing where we're going to continuously find new things to use compute for?

Sam Altman: The first cluster OpenAI had, the first one that I can remember the energy size for, was 2 megawatts. Adorable.

Greg Brockman: Yeah. We got things done with those two.

### [21:00](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=1260s) Future of AI and compute capacity

Sam Altman: I don't remember when we got to 20, or to 200. You know, we will finish this year a little bit over 2 gigawatts. And these recent partnerships will take us close to 30. The world has done far more than I thought they were going to do. Turns out you can serve 10% of the world's population with ChatGPT, and do the research, and do Sora, and do our API, and a few other things, on 2 gigawatts. But think about how much more the world would like to do than they get to do right now. If we had 30 gigawatts today, with today's quality of models, I think you would still saturate it relatively quickly in terms of what people would do, especially with the lower cost we'll be able to deliver with this. But the thing we have learned again and again is this: let's say we can push GPT-6 to feel like, you know, 30 IQ points past GPT-5, something big, and it can work on problems not for a few hours, but for a few days, weeks, months, whatever, and while we do that, we bring the cost per token down. The amount of economic value, and the sort of surplus demand, that happens each time we've been able to do that goes up a crazy amount. To pick a well-known example at this point: when ChatGPT could write a little bit of code, people actually used it for that. They would very painfully paste in their code and wait, and they would say, do this for me, and paste it back in, and whatever. And models couldn't do much, but they could do a few things. The models got better, the UX got better, and now we have Codex. Codex is growing unbelievably fast and can now do a few hours of work at a higher level of capability. And when that's possible, the demand increase is crazy. Maybe the next version of Codex can do a few days of work at the level of one of the best engineers you know, or maybe that takes a few more versions; whatever, it'll get there. Think how much demand there will be just for that, and then do it for every knowledge-work industry.

Greg Brockman: And one way I like to think of it is that intelligence is the fundamental driver of economic growth, of increasing the standard of living for everyone. And what we're doing with AI is actually bringing more intelligence and amplifying the intelligence of everyone. And so as these models get better, I think everyone's going to become more productive. The output of what is possible is going to be totally different from what exists today.

Andrew Mayne: It's interesting, too, going from a point with GPT-3, which was expensive comparatively, to where you're at with a level of GPT-5 and the fact that you can provide that freely to people. Is that a motivating factor for you, the fact that every time you create these new efficiencies, it just benefits so many more people?

Sam Altman: Yes. Absolutely.
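The growth figures Sam cites can be checked with simple arithmetic. The sketch below uses only numbers mentioned in the conversation (2 MW first cluster, roughly 2 GW by year end, roughly 30 GW with recent partnerships, 10% of the world served), plus an assumed world population of about 8 billion.

```python
# Rough arithmetic on the compute scaling described in the conversation.
# Figures: 2 MW first cluster, ~2 GW by end of this year, ~30 GW planned.

first_cluster_w = 2e6    # 2 megawatts
today_w         = 2e9    # ~2 gigawatts
planned_w       = 30e9   # ~30 gigawatts

print(today_w / first_cluster_w)  # 1000.0: a thousandfold since the first cluster
print(planned_w / today_w)        # 15.0: planned growth from today's footprint

# "Serve 10% of the world's population on 2 gigawatts": implied average
# power budget per served user, assuming a world population of ~8 billion.
served_users = 0.10 * 8e9
print(today_w / served_users)     # 2.5 watts per user on average
```

A few watts per user is striking context for Greg's earlier point about everyone eventually having an always-on agent: continuous per-user workloads would multiply that budget many times over.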

### [23:50](https://www.youtube.com/watch?v=qqAbVTFnfk8&t=1430s) Wrap-up and future projects

Hock Tan: Absolutely. Hock Tan: And from our side on hardware, compute capacity, where to some extent, the rubber hits the road on this, it's really incumbent on us to keep optimizing, pushing the envelope on leading-edge technology. Hock Tan: And there's still room to go. room to go even on where we are Hock Tan: as we go from two nanometers going forward, Hock Tan: less smaller than two nanometers Hock Tan: as we start doing all kinds of different technology. Hock Tan: It is really great, exciting times, Hock Tan: especially for the hardware and the semiconductor industry. Sam Altman: What Broadcon has done here is really quite incredible. Sam Altman: It used to be extremely difficult for a company like ours Sam Altman: about making a competitive chip. Sam Altman: In fact, so hard we just wouldn't have done it. Sam Altman: And I think a lot of other companies Sam Altman: wouldn't have done it as well. Sam Altman: And all of these sort of, Sam Altman: this customized chip and system to a workload Sam Altman: just wouldn't be a thing in the world. Sam Altman: But the fact that they have pushed so hard and so well Sam Altman: on making it so that they can, Sam Altman: a company can partner with them Sam Altman: and they can do a miracle of technology chip quickly Sam Altman: and at scale, unfortunately do it Sam Altman: for all of our competitors too, Sam Altman: but hopefully our chip will be the best. Hock Tan: - Yes, of course. Sam Altman: It's really quite incredible. Greg Brockman: And I think also not just what they can do for us today, Greg Brockman: but looking at the upcoming roadmap, Greg Brockman: it's just so exciting the kinds of technologies Greg Brockman: that they're going to be able to bring to bear Greg Brockman: for us to be able to utilize. Hock Tan: Well, it's just the excitement of, I mean, Hock Tan: enabling joint and collaboratively models, Hock Tan: chat GPT-5, 6, 7, on and on. 
Hock Tan: And each of them will require a different chip: a better, more developed, more advanced chip that we haven't even begun to figure out how to get to. But we will.

Greg Brockman: And actually, the GPTs are definitely going to be an increasing part of that. It'll be very interesting.

Charlie Kawwas: We're actually looking forward to that, because my software engineers already use that from a software point of view, and it's delivering the efficiency of dozens of engineers.

Sam Altman: Really?

Charlie Kawwas: Yes.

Sam Altman: Great.

Charlie Kawwas: On the hardware side, we're not there yet. But, you know, the good news is we'll get there little by little.

Sam Altman: We should talk.

Charlie Kawwas: Yes, we should absolutely leverage this. But I was going to say, with respect to compute: when we started building these XPUs, you could build at most a certain amount of compute in 800 square millimeters. That's it. Today, we're actually working together to ship multiples of these in a two-dimensional space. The next thing we're talking about is stacking these into the same chip, so now we're going into the Y dimension, or the Z dimension if you want to think three-dimensionally. And the last step we're also talking about is bringing optics into this, which is what we just announced: 100 terabits of switching with optics integrated into the same chip. These are the sorts of technologies that will take compute, the size of the cluster, and the total performance and wattage of the cluster to a whole new level.
Charlie Kawwas: I think it will keep doubling at least every six to twelve months.

Andrew Mayne: What kind of timeframe are we talking about? When are we going to first start to see what's coming out of the relationship?

Sam Altman: End of next year, and then we'll deploy very rapidly over the next three years.

Charlie Kawwas: Absolutely. Greg and I are talking about this at least once a week. We just had a chat earlier today on this. Good progress today.

Greg Brockman: But yeah, we're really excited to get silicon back starting soon, actually. Very soon. My view of this whole project is that it's not easy. It's easy to just say, "oh yeah, 10 gigawatts." But when you look at what is required to actually design a whole new chip, deliver it at scale, and get the whole thing working end to end, it's just an astronomical amount of work. And I would say that we're very serious. Our mission is to ensure that AGI benefits all of humanity, and we're very serious about "benefits all of humanity." We really want this to be a technology that is accessible to the whole world, that lifts up everyone. And you can really see that in trying to make the world one of compute abundance, because I think by default we're heading towards one that is quite compute scarce.

Andrew Mayne: Ask my wife when she's trying to get more Sora credits; it feels very scarce.

Greg Brockman: Yeah, we feel it so concretely. For teams within OpenAI, their output is just a direct function of how much compute they get. And so the intensity around who gets the compute allocation is extreme.
Greg Brockman: And so I think what we really want is a world where, if you have an idea and you want to create, you want to go build something, you have the compute power behind you to make it happen.

Andrew Mayne: Gentlemen, thank you very much for sharing this with us. It's going to be very exciting to see where this goes, and I hope we can keep talking about this as it continues to develop. Thank you.

Sam Altman: Thank you guys for the partnership.

Hock Tan: Thank you. Thank you for the partnership. We're really enjoying it.

Greg Brockman: We are too.

Sam Altman: Yeah.
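As a back-of-envelope aside: the "doubling at least every six to twelve months" figure Charlie Kawwas mentions compounds quickly over the three-year deployment window Sam Altman describes. A minimal sketch of that arithmetic (the 36-month horizon is taken from the conversation; the function name is ours):

```python
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Total compute multiplier after `months` of steady doubling."""
    return 2.0 ** (months / doubling_period_months)

# Over a three-year deployment window:
fast = growth_factor(36, 6)   # doubling every 6 months  -> 64x
slow = growth_factor(36, 12)  # doubling every 12 months ->  8x
print(f"3-year growth: {slow:.0f}x to {fast:.0f}x")
```

So even at the slow end of the claim, cluster capability grows nearly an order of magnitude over the deployment period described.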

---
*Source: https://ekstraktznaniy.ru/video/11207*