This is me, face to face with what could be the future of quantum computing. It was built by PsiQuantum, one of the most secretive quantum companies in the world, which just raised $1 billion to turn it into the first utility-scale quantum computer on the planet. But rather than using supercooled atoms or exotic particles, this quantum computer runs on light. And I'm inside to find out exactly how it works. This is no longer the delicate, research-grade equipment you might be used to seeing. It's a system designed to operate at massive, factory-wide scales, all maintained at an internal temperature of just 2 Kelvin, colder than the expanses of outer space. Now, trying to get an inside look at what is actually happening in such a secretive operation was no small task. In fact, the origins of PsiQuantum are just as shrouded in mystery as what they are actually building. But luckily, I've got an unfair advantage: dumb luck. This is the building I did my PhD in as a young optical physicist, and this is where PsiQuantum came from. In fact, I did my PhD alongside quite a few of the team members there. But it still took me five years of asking before they let me do this video. Now that they've said yes, I can finally take you inside one of the hardest challenges humanity has ever set itself. What I want this video to be is a deeper dive into each of the individual components that actually make this happen, because that is where the engineering magic of building a quantum computer really comes in. The singular goal of PsiQuantum is to build the world's first utility-scale quantum computer. And just to clarify, this video is not sponsored in any way. I just think this technology is kind of cool. No pun intended. So, let's dive in. To get us started, we need to do the obligatory section of any quantum video: ordinary computers operate on bits of zero or one to do computation, while quantum computers use qubits that can be zero, one, or anywhere in between. But what does that really mean? What I really like about PsiQuantum is that you can actually visualize it because of how they build their qubits. PsiQuantum's entire operation is built around this: a silicon chip, the same material already used in virtually all modern electronics. This is Dr. Ben Buridge, who is in charge of creating light at PsiQuantum. — This is actually one of our first 300 mm wafers. The idea of a wafer is essentially that you have this large piece of silicon, which is the material that all of our normal computers run off of, and you can use various lithography techniques to fabricate integrated circuits on it. — By using the same lithography and etching techniques that underpin the semiconductor industry, they carve 100-nanometer ridges into the surface of these wafers, called waveguides. These ridges operate just like optical fibers, guiding particles of light around the chip thanks to a higher refractive index than the surrounding material. — The way that we operate is slightly different, in that our integrated circuits aren't propagating electricity. They're propagating light. And so the way that we encode information is very different: we're actually encoding information in, say, the path of the photon. It actually matters where in the circuit the photons exist. — In order to make a qubit, PsiQuantum splits one of these waveguides into two paths. If the photon travels down one path, they call it a zero state.
And if it travels down the other, then that's a one. When the two waveguides are brought extremely close together, the photon has a chance to jump between the two paths with a measurable probability. Because we don't know which path it actually takes until we measure it, we say the photon exists in both paths at the same time. For a 50/50 beam splitter, there is a 50% chance of it being in either path. And the nice thing about building a quantum computer out of light is that once this state is created, it can propagate for kilometers without being destroyed. Or, as Mark Thompson, the CTO and co-founder of PsiQuantum, puts it: — They are massless, relativistic particles that travel at the speed of light. And if you're a physicist, you'll know that if something travels at the speed of light, it basically experiences no time. And so photons don't decohere. We actually know that experimentally. If you look at the photons that were left over from the Big Bang, which we call the cosmic microwave background radiation, they're still polarized. They still hold the same polarization state they had when they were created 14 billion years ago. So we know that the decoherence time of photons is at least the age of the universe. Whereas every other non-photonic qubit, every matter-based qubit system, has a decoherence time that is incredibly short. We're talking nanoseconds, maybe microseconds. And so that really limits the time in which you can do a computation. The other reason why they're really good is, again, because they're massless, relativistic particles that travel at the speed of light: they travel. If you want to build a system at scale, you're never going to put an entire quantum computer onto a single chip. We see that with laptops. Even a laptop isn't a computer on a single chip these days. In principle,
you can get a computer on a single chip, but it's not very powerful, right? If you look at the latest supercomputers or AI data centers, they're now hundreds of thousands of chips connected together within a single system. So the only way to make really powerful computing systems, and to be able to continue to scale these systems, is through interconnectivity. You have to move information between systems, and photons are really the most natural way to move quantum information between systems. — But the thing is, this whole time we've been talking as if getting just a single photon into our system and getting it to propagate around our circuit was a given. You'll be pleased to know that, like every other step in building a quantum computer, this too is one of the hardest challenges that humanity has ever faced. So, how did they do it? — The ability to generate a single particle of light is actually a really difficult thing to do. And we've spent the majority of our engineering effort in the early days of the company figuring out how to produce just this individual photon in a way that can be done with really high quality and really high efficiency. — Once you decide that information is encoded in which path light takes, and that where it ends up determines the answer of your calculation, you don't just need light moving through your waveguides. You need a regular stream of exactly one particle of light at a time, so that stray photons don't change the probabilities of your calculations or outputs. This is the equivalent of needing a clock in a normal piece of electronics. PsiQuantum's solution is to use light to create light, which has a nice ring to it. I asked the team to explain to me exactly how it works, and it turns out it was pretty complicated. So I imagine it was somewhat annoying when I asked Ben to wipe it all off and talk me through it a second time. — I'll draw it again. — I don't want to imagine. We've had a practice run at this. — As a thank you to Ben, I'll drop his full explainer over on Patreon and the YouTube members area, but here's what I learned, in a simplified and hopefully prettier explanation. No offense to your handwriting, Ben. They take short pulses of light at 1,550 nanometers, called a pump laser, and direct them into a series of ring resonator structures: circular waveguides whose circumference is an exact integer multiple of the laser's wavelength. This causes the laser light to tunnel into the ring and circulate around it thousands of times each pulse, constructively interfering with itself on every single pass. As the laser light propagates, its electric field interacts with the electrons bound inside the silicon, pushing and pulling on those electrons and causing them to oscillate. The confinement and energy intensity of that light is so high, forcing the oscillations into such a small volume, that the silicon's electric field response is no longer linear, what's known as a third-order (χ³) nonlinear optical response. This is important because it can spontaneously cause the energy of two photons from the laser to combine and produce a new pair of photons, through a process called spontaneous four-wave mixing. These two new photons are created together in a correlated quantum state, one slightly higher in energy and one slightly lower in energy, which is why we refer to them as the blue and red photons, respectively.
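Two pieces of bookkeeping govern a source like this, and they're simple enough to sketch: the resonance condition (the ring's circumference is a whole number of wavelengths inside the material) and energy conservation (two pump photons in, one signal plus one idler out, so 2·f_pump = f_signal + f_idler). Here is a minimal Python sketch of both rules; the pump wavelength, effective index, and mode number are illustrative assumptions, not PsiQuantum's actual design values.

```python
# Illustrative sketch (not PsiQuantum's design parameters): the two
# bookkeeping rules behind a ring-resonator photon-pair source.
#
# 1) Resonance: the circumference is an integer number of wavelengths
#    *inside the material*: L = m * (lambda / n_eff).
# 2) Spontaneous four-wave mixing conserves energy: two pump photons in,
#    one "blue" (signal) + one "red" (idler) out: 2*f_pump = f_s + f_i.

C = 299_792_458.0          # speed of light in vacuum, m/s

lambda_pump = 1550e-9      # telecom-band pump wavelength, m (assumed)
n_eff = 2.4                # assumed effective index of the waveguide mode
m = 300                    # assumed number of wavelengths around the ring

# Resonance condition gives the ring circumference for this mode number.
circumference = m * lambda_pump / n_eff
print(f"ring circumference ~ {circumference * 1e6:.1f} micrometers")

# Energy conservation: place the signal one free spectral range above
# the pump; the idler is then pinned symmetrically below it.
f_pump = C / lambda_pump
fsr = C / (n_eff * circumference)        # free spectral range of the ring
f_signal = f_pump + fsr                  # "blue" photon (higher energy)
f_idler = 2 * f_pump - f_signal          # "red" photon (lower energy)

print(f"signal ~ {C / f_signal * 1e9:.2f} nm, "
      f"idler ~ {C / f_idler * 1e9:.2f} nm")
```

The symmetry is the point: however the details shift, the red and blue photons always land equally spaced either side of the pump, which is what makes the pair correlated.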
These new photons couple back into the waveguide along with the bulk laser light that wasn't caught in the resonator. But before allowing the light to propagate down the world line into the rest of the chip, a reflection filter is used to reject the laser light and the red light, so that only the blue signal photon moves deeper into the chip. Each time this pulsed laser fires, it marks the start of a single clock cycle, defining a precise time window on the order of picoseconds. But obviously, it's not quite that simple. The probability of four-wave mixing actually occurring and producing a photon pair is extremely low, roughly one event in every million photons. This would make for a very irregular and very slow clock tick through the system and, as a result, very slow calculations. So immediately we find ourselves needing a better clock. — But if you cascade these devices, so you have lots of them on a single chip, then you can do something pretty special. — To start to solve this, PsiQuantum repeats the design many times, so that these resonators run in parallel and increase the probability that a photon is actually produced. The idea is to find which resonator track has produced a photon and move that path, and only that path, onto the world line, guaranteeing that only one photon moves through the system at a time. By default, each photon-generation track ends in a dead end, so that multiple photons don't accidentally enter the world line too close together and interfere with the calculation. But there is a bit of a catch here. To know exactly which track produced a photon would require actually detecting that photon, and detecting a photon destroys it. This is the classic catch-22 that makes quantum mechanics so annoying and confusing: you only know a particle's state by detecting it, but detecting it destroys it. So what do you do? PsiQuantum's solution comes from what
was previously a bit of an inconvenience: the fact that four-wave mixing actually produces two photons rather than one. Their solution is, rather than filtering out and discarding this second photon, to detect it. Its detection tells us that a source line has produced a signal photon, without needing to detect, and thereby destroy, that signal photon. Then we know to switch that source line onto the world line so that we can propagate the signal photon through the quantum computer. We call this second photon the sacrificial photon, or, after an emergency internal PR meeting, the heralding photon, which has a bit more of an optimistic skew about it. The arrival of this heralding photon signals that a photon of interest has been generated. But detecting the arrival of just a single photon of light, it turns out, is a pretty significant challenge. So how did they do it? We'll cover that, but first I need to tell you about this week's sponsor, Short Form. We all want to learn more than we actually have time for. My reading list is longer than my lifespan, and that's a problem Short Form is built to help solve. Short Form makes deep, human-written guides to the most important non-fiction books; not just summaries, but actual analysis. You get a one-page overview if you want the big picture, then chapter-by-chapter breakdowns, commentary, counterpoints, and even exercises, so the ideas actually stick. What I really like is that it doesn't stop at what the author says. It explains why the idea actually works, where it breaks, and how to connect it to other research and books, which, shockingly, is exactly how learning works. I mostly use Short Form for science, psychology, and productivity, especially books that I've already read but want to actually remember. They're constantly releasing new guides every week, and subscribers even get to vote on what gets covered next. They've also expanded way beyond books. There are articles and podcast guides, master guides that synthesize entire fields, and browser extensions that summarize articles, emails, and even YouTube videos with one click. It's basically a compression algorithm for the internet, but with editors. If you want to try it, go to shortform.com/dben. You'll get a free trial and $50 off the annual plan if you sign up through that link. Thank you to Short Form for supporting the channel. Now, back to the video. — HAIL MESSIAH. I'M NOT THE MESSIAH. — To detect such a tiny disturbance as the arrival of a single photon, you need a detector balanced on a knife's edge between two physical states. That is exactly the detector that PsiQuantum had to build. Apparently, the best way that humans know how to efficiently detect single photons is to use superconducting nanowire single-photon detectors. — You have a material that's sitting just below the temperature where it becomes superconducting, and then the way the detector works is that the warmth brought to that material by just a single photon is enough to stop the material from being superconducting anymore. And so that gives you a macroscopic event that you can detect and say, well, there must have been a single photon absorbed by that nanowire. — Each one is a nanowire of niobium nitride just 100 nanometers wide. When cooled below its critical temperature, the material enters a superconducting state where electrical resistance drops to zero. In this state, electrons no longer move independently, but instead bind into pairs known as Cooper pairs, linked through subtle interactions with the crystal lattice.
In order to act as a detector, the nanowire is biased with a small current running through it, set just below the maximum current the wire can carry while remaining superconducting. In this state, it only takes something as small as a single photon hitting the nanowire to transfer enough energy to collapse some of those Cooper pairs in a very small area. As this small population of Cooper pairs collapses, superconductivity breaks and a small section of the wire becomes resistive, producing a sudden, short, measurable voltage pulse of a few millivolts. That pulse is the detector's click, signaling that a single photon has arrived in your circuit. This whole process takes about 1 nanosecond, one billionth of a second, and then the wire returns to its cooled superconducting state, ready for the next detection. While this detector physics was known previously, to run a fault-tolerant quantum computer these detectors need to integrate reliably into silicon photonic chips, be aligned with the waveguides with nanometer precision, and behave pretty much identically across thousands of devices, all while detecting essentially every single photon that arrives, because if they miss one, that is an entire calculation that fails. The team told me that their detectors have already exceeded about 93% efficiency on chip, with the next devices they are developing reaching about 99%. And even this near-perfect level of performance will need to improve further to achieve the full vision of the fault-tolerant quantum computing approach they are building towards. Now, I'm going to gloss over this quicker than it probably deserves, but given that all of these photons are moving at the speed of light, the ability to detect the herald photon, generate an electronic signal that propagates and activates a gate to switch the right source path onto the world line, and to do all of that before the signal photon reaches the end of the source path, is a small miracle in its own right. But to build that device, they would need a component capable of switching millions of times faster than anything else on the planet. And to do that, they would need to develop an entirely new material.
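Before we get to that material, it's worth pausing on why those detector decimals matter so much. If a computation only succeeds when every photon is seen, the odds of a clean run fall off as the per-photon efficiency raised to the photon count. A quick sketch, with purely illustrative photon numbers rather than PsiQuantum's real workloads:

```python
# Why near-perfect detectors matter: if a run needs every photon detected,
# the chance of losing none scales as efficiency ** n_photons.
# Photon counts below are illustrative, not PsiQuantum's real workloads.

for efficiency in (0.93, 0.99, 0.999):
    for n_photons in (10, 100, 1000):
        p_all_detected = efficiency ** n_photons
        print(f"eta={efficiency:.3f}, n={n_photons:4d}: "
              f"P(no photon lost) = {p_all_detected:.2e}")
```

Loss compounds exponentially with the number of photons involved, which is why 93% is a milestone, 99% is a requirement, and better still is the long-term target.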
There has been one candidate material that has long interested scientists for this capability: barium titanate, or BTO, which allows us to change the refractive index of the material by applying a voltage across it, making it one of a class of materials called electro-optic materials. BTO is a ferroelectric crystal, meaning its atoms naturally arrange themselves so that positive and negative charges are slightly offset. That internal asymmetry creates a built-in electric polarization. When you apply an external voltage, the atomic positions shift just a little, and the internal electric field of the material changes with them, changing the refractive index. Increasing the electric field increases the refractive index and so decreases the speed of a photon traveling through the material, a phenomenon known as the Pockels effect. And this refractive index change can cause a photon to move from one path to another. The amazing thing about BTO is that these phase adjustments can be made unbelievably quickly, locally, and repeatedly, up to gigahertz speeds, making it ideal for interfacing with modern electronics and controlling a particle as fast as light. But as with all things when it comes to building a quantum computer, making it actually work is the hard part. Getting BTO to grow on and interface with silicon, the rest of PsiQuantum's architecture, has taken over a decade of problem solving. The team showed me around the custom-built facility needed to grow BTO, how they've perfected its growth, and what happens when it goes wrong. The underlying problem is that silicon has a tightly ordered crystal lattice with pretty consistent thermal properties. BTO, by contrast, has a more disordered structure. So if you try to grow BTO directly onto silicon, this mismatch creates mechanical strain, causing the crystal to crack or form defects, and the material can even lose its electro-optic properties as a result. And keep in mind, these devices need to survive the transition from room temperature down to about 2 Kelvin, about as extreme a temperature journey as it is possible for a piece of electronics to endure. Early attempts in the broader research community had shown that this was possible in principle, but not at the scale PsiQuantum needed. To make it actually work, PsiQuantum had to develop a layering process that gradually transitions from silicon to oxide and finally to BTO. And whilst there is logic to all of those crystal formation steps, it is also a bit of a dark art, so it required thousands of iterations and essentially brute-forcing to get the recipe just right. Within their manufacturing process, individual layers of atoms are deposited essentially one by one, slowly controlling the temperature and purging everything that isn't exactly the atom that should be in exactly the right place at the right time. — You need to switch this circuit on the order of the speed of light. This photon has to arrive from this circuit to whatever circuit it needs to get to. And so obviously you can delay it by sending it through optical fiber and waiting for it, but essentially you only want to do that for the minimum amount of time it takes your signal to propagate from one part of the chip to the other, so that you can then switch that circuit. —
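The knob this gives you is a voltage-controlled phase. A hedged sketch of the standard electro-optic arithmetic: an index change Δn over a waveguide section of length L retards the light by Δφ = 2π·Δn·L/λ, and a π shift is exactly what it takes to route a photon fully from one output of a two-path interferometer (the layout covered properly in a moment) to the other. All numbers below are illustrative assumptions, not PsiQuantum's device parameters.

```python
import numpy as np

# Electro-optic switching arithmetic (illustrative numbers, not
# PsiQuantum's device parameters). A voltage-induced index change dn
# over an electrode of length L retards light by dphi = 2*pi*dn*L/lam.

lam = 1.55e-6    # m, telecom-band wavelength (assumed)
dn = 1e-3        # assumed Pockels-effect index change at the drive voltage
L = 775e-6       # m, assumed electrode length

dphi = 2 * np.pi * dn * L / lam
print(f"phase shift = {dphi / np.pi:.2f} * pi")   # 1.00 * pi here

# Why a pi shift is a full switch: put the phase between two 50/50 beam
# splitters and the photon's exit probabilities flip from one path to
# the other.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # 50/50 beam splitter

for phi in (0.0, dphi):
    out = BS @ np.diag([1, np.exp(1j * phi)]) @ BS @ np.array([1.0, 0.0])
    p0, p1 = np.abs(out) ** 2
    print(f"phi = {phi / np.pi:.2f}*pi -> "
          f"P(path 0) = {p0:.2f}, P(path 1) = {p1:.2f}")
```

With zero phase the photon exits entirely in one path; with a π phase it exits entirely in the other. Applied at gigahertz rates, that is a switch.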
With the help of a small signal-line delay, and the very lucky coincidence that PsiQuantum has access to BTO, the fastest switching material humanity has ever produced, doing this somehow becomes possible, as they demonstrate in this paper, which I'll leave in the description down below if you want to read up a little more on it. But now that we can generate light and get it into our system, the question is: what do we do with it? How do we do math with a particle of light? On its own, a 50/50 beam splitter isn't particularly useful, as that's kind of a boring result. To do meaningful computation, we need to create probability distributions as we see fit and to control them dynamically during computation. The backbone of photonic engineering that actually makes this happen is the Mach-Zehnder interferometer: a combination of two 50/50 beam splitters connecting two beam paths. The first beam splitter creates the normal 50/50 distribution, placing the photon in a superposition of being in both paths at the same time. If we let that qubit continue forward to the second 50/50 beam splitter, this distribution remains unchanged and kind of boring. However, if one half of the photon is slightly delayed along its path, then when it recombines at the second beam splitter, the probability of it ending up in either path starts to change. We can control this effect incredibly precisely by adding a heating element to one of the paths and increasing its temperature. As the temperature of a material like silicon increases, so does its refractive index. This is called the thermo-optic effect. As the refractive index increases, the speed of light in that material slows down, introducing what we call a phase delay between the two halves of the photon. And yes, we are ignoring here that photons can't actually be split in half. Welcome to quantum mechanics. Now, this becomes interesting when we scale up from one Mach-Zehnder interferometer to a mesh of many of them. That gives you a programmable optical circuit that can implement an arbitrary sequence of these mixes, essentially the photonic analog of gates in a classical computer. But using heaters to introduce this phase delay creates a major problem. Heating and cooling silicon takes microseconds to milliseconds, which is an eternity when
your information carrier is a photon moving at nearly the speed of light. This switching time limits the number of calculations you can perform per second, which kind of defeats the whole point of using the universe's fastest particle. Trying to keep that heat where you actually want it is also a major problem. Warming one waveguide creates thermal crosstalk between neighboring components, unintentionally introducing interactions between different interferometers. And these small thermal drifts can accumulate and break your control over the entire computation. During my PhD, I remember many a PhD student venting over just how delicate and breakable early versions of these quantum chips actually were. The good news for PsiQuantum is that the same BTO material that enabled switching in the photon-generation circuit also allows them to control these qubits. By integrating it directly into their Mach-Zehnder interferometer circuits, PsiQuantum can dynamically reprogram the interferometers, slowing down and rerouting individual photon paths and performing the linear algebra operations that make quantum algorithms actually possible. At the end of the computation, all paths end at a detector. And although the photon passes through all paths at once, it is only ever detected at a single location, the likelihood of which depends on the probability distribution. Each detector reports a binary outcome: one if a photon is detected, zero otherwise. To get a final result, the computation is repeated many times, building up a distribution of outcomes and identifying the answer as the result that consistently dominates across repeated measurements. Because these detectors are so incredibly delicate and rely on superconductivity for detection, the entire system needs to run inside a radiation-shielded infrastructure cooled down to just 2 Kelvin, a temperature at which even atoms are barely vibrating. Achieving this, as expected, is reasonably hard. These are the cryostats PsiQuantum is building to keep its technology cool: 4-meter-tall steel devices the size of massive walk-in fridges. Internally, the team calls them the MK2s, even if Mark vehemently denies that they are actually named after him. — It's not named after me. — It's hard to believe. But at the heart of each cryostat, the chips themselves are mounted on vertical blades and held under ultra-high vacuum. That vacuum removes air as a pathway for heat, so no atoms can accidentally bump the sensitive nanowire detectors and falsely indicate that a photon has arrived. Around this system are multiple layers of thermal and electromagnetic shielding, intercepting heat and light that leaks in from the room-temperature environment around it. Keeping these blades at 2 Kelvin requires continuous cooling. Liquid helium is stored in a reservoir at the base of the cryostat and circulated upwards through the system. As it flows past the blades, it absorbs heat and carries it away, maintaining the detectors at their optimum temperature. But that helium can't circulate indefinitely. After just a few minutes, it must be fully removed, recondensed, and recooled in large external cryogenic tanks before being fed back into the system. And PsiQuantum has an entire team managing just the cooling of all this equipment.
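The reason that cooling team has a job at all comes down to brutal thermodynamics. The Carnot limit says that pumping heat out of a cold stage at temperature T_cold, while rejecting it at room temperature T_hot, costs at least (T_hot/T_cold − 1) watts of work per watt of heat, and real cryocoolers land well short of even that ideal. A quick sketch of the ideal-case numbers, with room temperature assumed at 300 K:

```python
# Ideal (Carnot) work needed to pump 1 W of heat out of a cold stage:
#   W = Q * (T_hot - T_cold) / T_cold
# Real cryocoolers are many times worse than this ideal, but the scaling
# alone shows why 2 K is a far friendlier target than 20 mK.

T_HOT = 300.0  # K, room temperature (assumed)

for t_cold in (4.0, 2.0, 0.02):
    watts_per_watt = (T_HOT - t_cold) / t_cold
    print(f"T_cold = {t_cold:5.2f} K: "
          f">= {watts_per_watt:8.0f} W of work per W of heat removed")
```

Roughly 150 watts per watt at 2 Kelvin versus about 15,000 watts per watt at 20 millikelvin: two orders of magnitude, even before real-world inefficiencies.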
The good news is that whilst 2 Kelvin is unimaginably cold to most of us, it is about 100 times warmer than the cryogenic temperatures demanded by other superconducting quantum computers, which typically operate around 20 millikelvin. As you push closer to absolute zero, removing heat becomes exponentially harder; even just a few microwatts of excess heat can easily overwhelm a system sitting at 20 millikelvin. So operating at a balmy 2 Kelvin gives the PsiQuantum infrastructure significantly more robustness compared to other approaches. And the fact that it's their detectors that are the limiting factor means that, with material improvements, they might slowly be able to raise this temperature further over time. Internally, they're already targeting 4 Kelvin as the next step for rolling these systems out at mass scale. This also means that the amount of energy required to run these systems en masse is significantly lower, and that means they can start thinking about building millions of qubits, not just dozens, moving them into a different league of problems. But one limitation of PsiQuantum's approach is that each chip can only hold so much circuitry, and each MK2 only so many chips. So in order to truly reach utility-scale quantum computing, you need to be able to move a quantum state from one chip to another without destroying it. There is one last major leap of genius to go. — Which is our Project Quantum Leap. — When we talked about qubits, we said the information is stored not in the light itself but in the precise phase relationship between the two parts of the photon's quantum state. We also said that it is very sensitive to small differences between those two paths as it propagates through the system. So, the way it was explained to me: if we were to take both of those qubit paths, move them onto their own optical fibers, and connect both fibers to a new chip, even small disturbances, temperature changes, fiber length differences, or just mechanical vibrations, could shift the delicate
phase relationship between those paths and destroy the qubit. On to our final leap of genius, at least the last one we'll cover in this video. The PsiQuantum team take their quantum state that is encoded in space and choose instead to encode it in time, which sounds like I've transitioned to talking in pure sci-fi. In fact, this is the minute they started explaining that to me and then gave me no further explanation, as if that is just a reasonable thing to say to another human being and assume they understand it. Fortunately, the principle of how this actually works isn't too complicated to get across. At the space-to-time converter circuit (classic naming convention), the incoming photon is split at a beam splitter. One of the paths is deliberately longer than the other by around one nanosecond. That might sound negligible, but for light it corresponds to tens of centimeters of optical path length, which is enormous on a chip scale. The result is these long, spiraling pathways that slow one half of the light down. The two paths are then recombined onto a single world line. The result is no longer a qubit encoded in two spatial paths, but in two moments in time. — You know, the kind of phase oscillations that happen inside an optical fiber don't happen at a gigahertz; that's far faster than anything drifting in a fiber. And so, just by nature of the two parts being so close in time, any distortions that would happen because of thermal effects or anything else in the fiber don't affect the coherence of that qubit. And so when that qubit arrives and gets decoded, by simply time-reversing the circuit, exactly what you put in comes back out the other end. We've done this experiment and measured far in excess of 99% fidelity. — As a result, things like temperature changes, fiber length drifts, or other environmental changes, because they are so slow compared to the nanosecond spacing between the two time bins, affect both halves of the photon equally, keeping the qubit intact, a capability that is almost impossible in other quantum computing approaches. When this time qubit arrives at the next chip, the process is reversed: another interferometric circuit converts the time-bin encoded qubit back into two spatial pathways, allowing the photon to re-enter the normal interferometric network and propagate as a normal spatial qubit. You might be dubious, as I was, that everything I just said sounds largely made up. So is there proof this actually works? The million-qubit question, quite literally, is: does this platform actually scale? As a first-pass stress test to answer that question, PsiQuantum ran one of the most ambitious quantum tunneling experiments I have ever seen. And when I say quantum tunneling, I mean, yes, literally: here I am entering one end of a quantum tunnel, a YouTube first. Two chips, each housed inside its own separate cryostat, were connected by 250 meters of a single standard optical fiber running through a tunnel beneath two of their labs. This experiment was the culmination of decades of work and required all of the components that we have seen operating today. So, let's walk through what actually happened. On the first chip, a high-intensity laser pulse interacts with a series of ring resonators, producing a pair of photons, one red, one blue. These travel into a reflection filter circuit, where the red photon is directed onto a single-photon detector.
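A quick sanity check on that delay line before we continue the walkthrough: a one-nanosecond delay needs a physical path of c·t/n, which for realistic refractive indices really is tens of centimeters, hence the spirals. The indices below are typical textbook values, assumed here for illustration:

```python
# Length of waveguide needed to delay light by 1 ns: L = c * t / n.
# Refractive indices below are typical textbook values, assumed here.

C = 299_792_458.0   # m/s, speed of light in vacuum
DELAY = 1e-9        # s, the ~1 ns time-bin separation described above

for name, n in [("vacuum", 1.0), ("optical fiber", 1.468),
                ("silicon", 3.48)]:
    length_cm = C * DELAY / n * 100
    print(f"{name:13s} (n={n:.3f}): {length_cm:5.1f} cm of path per ns")
```

Eight to thirty centimeters of path, on a chip whose features are measured in nanometers. Now, back to the walkthrough.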
The detection of this heralding photon tells the system that a photon has been generated and sends a signal to switch a BTO circuit, connecting the source line onto the world line. As the single signal photon propagates along the world line, it is split by an interferometric circuit into a spatially encoded qubit state, so that one half of the photon exists on each path, producing a qubit. That qubit is sent to a space-to-time conversion circuit. The first half is coupled directly onto a single optical fiber, while the second half is sent through a 1-nanosecond delay line and then coupled onto the same fiber. This converts the spatially encoded qubit into a time-encoded qubit. This time-encoded qubit travels the length of about three football fields in 800 nanoseconds before arriving at the second chip. Here it is re-encoded from a time qubit into a spatial qubit, before each path is directed into a single supercooled nanowire detector. In the simplified setup of the experiment, the key thing was that the probability distribution remained exactly 50/50, indicating that the phase relationship of the photon hadn't changed and that the qubit was unaffected. The actual results from PsiQuantum's paper were better than just promising: across thousands of runs, the recovered quantum state matched the original with 99.7% fidelity. That number is the reason why PsiQuantum is one of the most promising quantum computing companies in existence right now. It means that quantum computing doesn't have to live on a single chip, or even inside a single cryostat. — You know, one module that we'll fill with thousands and thousands of our qubit generators. And then, because of the unique quantum mechanical properties of photons, we can put them through standard optical fiber and connect them together, almost in the same way that your house is connected to the telecommunications exchange. We use the same fiber optics,
but we can use them to connect systems together. And so the way that we're looking to scale up our system is just the same way you scale up any data center. It's this distributed, modular system where you photonically and electronically connect your modules together. — Each MK2 can hold many photonic chips: some dedicated to generating photons, others performing long algorithmic computations, and others still detecting results using superconducting nanowires. These modules can then be optically connected together, forming one large utility-scale quantum computer distributed across an entire facility. PsiQuantum has already announced two factory-scale quantum computer buildouts, one in Australia and one in Chicago. Each site is planned to house hundreds of MK2 cryostats, with potentially hundreds of quantum photonic chips inside each one. This is exciting because, to me, this is the transition point between this technology being very much research-grade in its capability and the edge of rolling out something that may be the first true quantum computer. I've been very lucky to be at the absolute periphery of this whole process, and I remember the guys I met during my PhD spending hundreds of hours hand-gluing optical components onto circuits, and then spending even more hours trying, and often failing, to get them to actually work. Watching the physics arts-and-crafts project this once was turn into what genuinely is one of the most compelling quantum computing companies in the world is kind of unreal. And yes, it has taken a decade, and it may take another decade before something truly world-changing comes as a result of it. But I hope that by shedding some light on the scale of the problems they are facing, and just how difficult the engineering challenges underneath them actually are, you can see why sometimes these things just take a little bit of time. There's actually a ton of stuff that I still could not fit into this video, despite how long it now is. If you would like to hear it, including the full conversations with Professor Mark Thompson, the CTO and co-founder of PsiQuantum, you can find them in our members area over at patreon.com/dbarmiles, along with a bunch of other cool stuff. If you want to support the channel and the absolutely amazing team that helps me put this sort of content together, I would love your support over there. A huge thank you to PsiQuantum for having me and explaining everything to me so patiently. And as always, thank you to you guys for watching. I'll see you next time. Goodbye.