# Dramatically improve microscope resolution with an LED array and Fourier Ptychography

## Metadata

- **Channel:** Applied Science
- **YouTube:** https://www.youtube.com/watch?v=9KJLWwbs_cQ

## Transcript

### [0:00](https://www.youtube.com/watch?v=9KJLWwbs_cQ) Segment 1 (00:00 - 05:00)

today on Applied Science I'd like to show you this technique of improving the resolution of a microscope by adding an LED array where the normal illumination goes, then capturing hundreds of images, one for each LED, and combining them computationally to make a result that is dramatically higher resolution than the input images. This is crazy, right? It looks like a completely different image, but this is actually a valid technique, even though it does appear to violate the physical resolution limit of the microscope objective. So let me show you how this works.

This very clever technique was first developed at Caltech within the last 10 or 15 years, and most of the heavy lifting is done in the algorithm. The physical setup for capturing images is pretty straightforward, but there are a couple of important things to point out. The LED array really should approximate point sources of light. I originally tried making it work with this larger WS2812 array of LEDs, and it does not work, because those LEDs are just physically too large. So I ended up using a so-called HUB75 LED array. These are really small LEDs with no smarts in them: it's basically just a bunch of shift registers, as opposed to an LED matrix like this where each LED has a processor inside it. I'm using a pitch of 2 mm, but really the ideal pitch for this setup is about 4 mm, so I'm just using every other LED.

This whole process works best with a 4X or a 2X microscope objective. If your objective has an infinity sign on it, that means the optical aberrations of that objective are best corrected when the objective has infinity space behind it; in other words, your camera is infinitely far away on this side. Since that's pretty impractical, we have to add another lens, called a tube lens, to focus this infinity light down to the focal plane of the camera. If your microscope objective has a number here like 160, that means you do not need a tube lens; in fact, it works better to have the focal plane 160 mm behind the microscope objective.

When I was initially putting together YouTube video ideas about this technique, one of the hooks I was thinking of was, you know, turn a cheap microscope into a really expensive microscope with this one weird trick. But as it turns out, that's not accurate, because you need a very high-quality imaging system to make this technique work, since we're extracting so much information from the images to come up with this resolution boost. If you use cheap optics that have a lot of aberration, this doesn't work at all; you actually need pretty high-quality optics.

I'm controlling the HUB75 LED array with a Teensy microcontroller, and I also use the microcontroller to control the shutter release for the camera; it's just this little relay here. The way I do this is to hardcode everything into the Arduino program and then download the program, which starts the whole acquisition process of stepping through the LEDs and acquiring an image for each one. I'll upload all of this to my GitHub, including the reconstruction algorithms and everything; it's all MIT licensed, and you can play with it to your heart's content.

I acquired all of the images using a Lumix G85, which is actually what you're looking through right now (this is a GH1 just standing in for it). But as we'll find out, this is actually a big problem: there's a color filter in this camera, and that poses a really big problem for reconstructing these images, which we'll get to later. So if you're going to attempt this technique yourself, you've got to get a monochrome camera, something that doesn't have a color filter array, which is actually not so easy these days.

Before we get to the computer to process our images, I want to show you visually how this technique works, because it's just so clever. I have a laser pointer here, and if we put a diffraction grating in there, you know what's going to happen: the light diffracts. A primary beam goes through, and then we have these diffracted beams. This is 600 lines per millimeter, and if we go up in density to 1,000 lines per millimeter, the angle of diffraction is much larger, of course. If we had an unknown grating, we could put it here, measure the angle of diffraction, and figure out how many lines per millimeter it had.

Okay, that all makes sense, but let's say we wanted to get an image of this diffraction grating, like a microscopic image of those lines. So we get a lens and put it here, and when we put our grating in front of it, everything seems fine: all the light is going into the lens, and we can focus it down and get an image of that diffraction grating. But look what happens if we go back to the really high density grating: the diffracted beams miss the lens entirely. There's no way that the information from this diffraction grating, about how many lines per millimeter it has, can get into that lens, because the angle of diffraction is greater than the light cone that the lens is accepting. So it's a visual way to see that there's no way this imaging system can resolve how many lines per millimeter this has, because clearly

### [5:00](https://www.youtube.com/watch?v=9KJLWwbs_cQ&t=300s) Segment 2 (05:00 - 10:00)

the light is missing the lens system entirely. Now, if we moved the grating closer to the lens, in other words if the lens were bigger in diameter or had a shorter focal length, the light would go into the optical system and we could have a chance at imaging it. In other words, if the lens had a higher numerical aperture, the resolution limit would be higher, because it can accept more of these highly diffracted beams of light into the optical system to get focused down to make the image. Pretty cool, right? That's the fundamental basis for diffraction-limited optics: how much of a diffraction pattern the system can accept determines how much resolution you can get.

Now you might be saying, sure, this works for diffraction gratings, but what about a real slide that has an actual sample in it? That's not a diffraction grating. Well, actually, it kind of is. Light diffracts whether you want it to or not, so even very complicated subjects, like an animal cell or plant cell or whatever you're looking at with your microscope, contain all of the same features that cause diffraction in a grating.

Now you might be able to guess what this technique is going to do. Remember, we're shining our beam in like this, and the diffracted light is missing our optic. So how do we get it in there? What if we turned the laser pointer by a known angle? Now look what's happening: the primary beam is missing our lens, but the diffracted beam is going right into it and getting focused down. Quite clever, isn't it? In fact, we can capture images all along these different angles and determine the number of lines per millimeter in our unknown grating, just based on how far we've rotated our light source around and whether any light is coming through. And remember, a real slide with animal or plant cells or whatever is just a very complicated diffraction grating containing all sorts of different lines per millimeter. So as we sweep through all these different angles, we're capturing different spatial frequencies at every angle.

This whole technique of Fourier ptychography captures all these different spatial frequencies using a small lens and then puts them all back together, so it's basically like having a huge lens, because we've swept through all the different angles of possible diffraction. And remember, this is happening in two dimensions: we're sweeping vertically and we're sweeping horizontally, and we're putting all of this together as if our lens were enormous. The trick is how to put all this together computationally. That's why it took quite a while to come up with this technique and make it work; it's not trivial to capture all those images with all the different spatial information they contain and combine them into one final output image.

We can actually see this work with real optics too. I have a 2X microscope objective here, the laser pointer, and a screen down there. If we put a fairly low density diffraction grating in here, it resolves the lines, because the pitch is coarse enough that this optical system can resolve it, no problem. It's pretty obvious what's going on there. Now if we go to a high density grating that this system has no chance of imaging, we can see the diffracted beam is missing the optic entirely. We can go down there and magnify the image as much as we want, and we never see any lines, because the information is just missing. Then we do our sneaky technique and rotate the laser pointer until the diffracted beam is going through; now the primary beam is hitting here, and the diffracted beam is going into the optics. So if we go down there and magnify that image, are we going to see any lines per millimeter? No, actually, because this optical system can't possibly image 1,000 lines per millimeter. The fact that we're seeing light down there just means that there's a region of the image that is diffracting, and the actual information is the angle at which we're shooting the light into the grating.

So imagine it this way: instead of just a flat grating here, imagine this were a tiny checkerboard, where all the white squares are 1,000 lines per millimeter and all the black squares are 600 lines per millimeter. What's going to happen is that as we rotate through these angles, there'll be one angle at which all of the black squares are activated, meaning they're diffracting light and sending it into our optical system, so we'll end up with all of the black squares illuminated over there. Then when we get to the magic angle for 1,000 lines per millimeter, all the white squares will be activated, and we'll see that. So the image we're getting tells us where in the image that spatial frequency is happening; it doesn't tell us exactly what spatial frequency it is, because we're getting that information from the angle.
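The grating demonstrations above can be put into numbers with the grating equation, d·sin(θ) = m·λ. The wavelength and objective NA below are illustrative assumptions (a 532 nm green laser pointer and a typical low-power objective), not values stated in the video; this is a minimal sketch of why steep diffracted orders miss the lens on-axis and how tilting the illumination brings them back inside the acceptance cone.

```python
import math

# Assumed values for illustration; neither is quoted in the video.
WAVELENGTH_MM = 532e-6   # 532 nm green laser pointer, expressed in millimeters
OBJECTIVE_NA = 0.10      # typical low-magnification objective

def first_order_sin(lines_per_mm):
    """Grating equation d*sin(theta) = m*lambda with m = 1: returns sin(theta)."""
    d = 1.0 / lines_per_mm            # grating pitch in mm
    return WAVELENGTH_MM / d

def first_order_angle_deg(lines_per_mm):
    """Diffraction angle of the first order, in degrees."""
    return math.degrees(math.asin(first_order_sin(lines_per_mm)))

def captured(lines_per_mm, illum_sin=0.0):
    """With illumination tilted by sin(theta_i), the grating shifts the beam by
    lambda/d in sine space; the objective accepts it only if the result falls
    inside its NA cone."""
    return abs(first_order_sin(lines_per_mm) - illum_sin) <= OBJECTIVE_NA

for lpm in (600, 1000):
    print("%4d lines/mm: first order at %.1f deg, captured on-axis: %s"
          % (lpm, first_order_angle_deg(lpm), captured(lpm)))

# Tilt the illumination to the "magic angle" for 1,000 lines/mm and the
# diffracted order lands inside the lens, like rotating the laser pointer:
print("1000 lines/mm, tilted illumination:", captured(1000, illum_sin=0.532))
```

This mirrors the checkerboard intuition: the small, fixed NA never changes, but each illumination tilt selects which band of spatial frequencies it can see.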

### [10:00](https://www.youtube.com/watch?v=9KJLWwbs_cQ&t=600s) Segment 3 (10:00 - 15:00)

So you can see how this would scale up to an actual slide of animal or plant cells: it's basically just a very complicated checkerboard where different parts of the image have different spatial frequencies, and we collect all those spatial frequencies by changing our illumination angle. For every pixel in the image, we end up with several hundred spatial frequencies, and we look through all of them to figure out what the actual features are in that region of the image.

Moving to actual images collected with this setup, you can see this working with the Air Force resolution target. If we start off with central illumination, that's called bright field, because we're looking right at the illumination source. But as we move the angle of illumination off to the side, you can see each region of the resolution target lighting up in turn as we get to more and more extreme angles and higher spatial frequencies. It almost seems so simple that it's never going to work, but the technique really does scale up to very complicated images, and we can extract a huge amount of detail by collecting just a few hundred spatial frequencies.

Let's talk about the image processing pipeline. We've got a whole bunch of images from the digital camera, and they're all raw; we can't shoot in JPEG because the compression would destroy this whole technique. And again, if you want to try this, don't do what I did, because collecting images from a modern color DSLR makes this not work, and I'll show you why. Modern color cameras have a color filter over the image sensor, and we're going to illuminate the scene with a monochromatic LED, let's say red LEDs. It seems like it would work to use only the red-filtered pixels, just taking the red channel, but this doesn't work, because then we're subsampling the image, and if we interpolate between red pixels, it's possible that we're going to make an incorrect assumption. Remember, this technique is super sensitive to every little nuance of the image: movements of hundreds of microns in the illumination angles and camera, a pixel being off by just a few values. It's really very sensitive, and I tried my best to use a single channel, and it doesn't work; it will not reconstruct the image at all.

So I came up with another idea. As it turns out, the red, green, and blue color filters in my camera overlap quite significantly, and if I illuminate the scene with a green LED, a little bit of information actually comes through all of the channels. In this scene you can see there's a little bit of red, a little bit of blue, and some green. What we can do is apply a correction factor to the red, blue, and green channels so that when we put them all back together, we end up with a black and white image that is really high resolution, because we're basically using all of those subpixels and adjusting for their different sensitivities. It does seem a little weird that a green LED registers in both the red and blue channels of the camera, but it's true. Those channels are not quite as sensitive (you can see there's a pretty big difference), but it works well enough to let this technique function on certain images. The Air Force resolution target works, but this tissue sample does not; it's too complicated, and I could not get it to reconstruct. So again, if you're going to try this, you've got to get hold of a scientific sCMOS camera with a monochrome image sensor.

Anyway, I used a program called RawTherapee to take the raw images from the camera, apply this color correction, debayer them, and save them as TIFF images for import into the Octave script, which does all the work. I didn't write all of this; it came from the Waller Lab at UC Berkeley, and credit where credit is due: these folks published their work under the MIT license and put it all up on GitHub, which allowed me to make this whole video, so a super big thank you to them. Of course the code didn't work when I downloaded it, but hey, you get what you pay for. I made a bunch of tweaks and re-uploaded it on my GitHub, and you're welcome to pick through that; hopefully it works when you download mine, but you never know.

The program must know about the optical setup in extreme detail. Obviously it needs to know the wavelength of the light and the dimensions of the image. It needs to know the numerical aperture of the objective you're using, so it knows what light cone the objective can capture. It needs to know the magnification of the setup and the size of the pixels on the image plane. This Np is basically how big of a reconstruction you want to make; one limitation of this implementation is that it can't improve the resolution of your input image edge-to-edge, it has to use a subset, so 500 pixels is going to be the subset taken from this 3,000 pixel square image. We also need to know the spacing between the LEDs and the distance from the

### [15:00](https://www.youtube.com/watch?v=9KJLWwbs_cQ&t=900s) Segment 4 (15:00 - 20:00)

LEDs to the image plane, to figure out the angle. Remember, what this thing really needs to know is the illumination angle, because that tells it what spatial frequency to assign to each acquired image. I added a way of keeping track of which LEDs need to be illuminated: I generate a text file containing a C array, an array defined in the C language, that we can pump into the Arduino sketch so it can cycle through those entries to light up all the LEDs. So it prints out this array that you can copy and paste into the Arduino C code.

This is where we start loading the images; pretty straightforward. There are a bunch of commented lines which handle different types of images. One limitation is that RawTherapee can't produce mono images; they're all color, and even if you made a black and white image in RawTherapee, it just stores the same value for red, green, and blue. So to make things easier, this can just pick out one channel if you want.

A lot of this work is basically setting up the matrix that tells the system what spatial frequency it gets from each acquired image, and just keeping track of all these different matrices is a huge amount of work. There are a ton of off-by-one-style errors: zero-indexed versus one-indexed, row-major versus column-major, flipped up-down, mirrored left-right. It's a ton of stuff to keep track of, especially when you're coordinating between this and the firmware code in the real world. It helps a lot to come up with test lighting schemes that start in one corner and go to the other corner, to make sure that all the lefts and rights line up.

I noticed a couple of notes from the original author of this code saying that denoising is important, so I experimented with different settings in RawTherapee, denoising either not at all or to a very high extent, and I didn't notice a huge difference. There's also a small amount of code in here that attempts to subtract the background level of the image sensor, and I think that's because the image sensor they were using is much rawer than even the raw output from a consumer camera; in other words, its background sits at several hundred counts, whereas a consumer camera really does get pretty close to zero, at least with the exposures I was doing. So this part could actually be improved: it finds a couple of regions that are representative of background and attempts to estimate the background level from those. I found it a lot easier to figure it out once and set it to a static value.

That reminds me of another problem. The bright-field images, the ones where the illumination is directly below, are obviously quite a bit brighter than the ones off to the side. So the question is, can you change the exposure? Does the algorithm allow different exposure levels? I never quite figured this out. I think the short answer is no, you have to use the same illumination level for the center images as for the edge images, but I'm not exactly sure, and this is left as an exercise for the reader.

After all this, everything gets pumped into the actual algorithm, which takes all these images and fills up Fourier space with the Fourier transforms of the input images. You can think of it like filling up a grid where each image we acquired occupies a different place in Fourier space. It puts all these images together, stitching them (overlap is important to make the stitching process easier), and once it has stitched together all these images in Fourier space, it takes the inverse Fourier transform to go back to a final output image. Now, the challenge is that we need the phase information to do the inverse Fourier transform, and we don't have it.
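To make the stitching step concrete, here is a toy sketch of my own (a simplified illustration, not the Waller Lab code): each illumination angle is modeled as a shift of the object's spectrum before it is clipped by the objective's circular pupil, and the resulting patches are pasted back into one large Fourier grid. To isolate the stitching idea, it cheats by keeping complex values, pretending the phase of each capture is known, which is exactly what a real intensity-only sensor cannot provide.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                        # high-res grid size (assumed for the toy model)
obj = rng.random((N, N))      # stand-in object (amplitude only)
OBJ = np.fft.fftshift(np.fft.fft2(obj))   # object spectrum, DC at center

PUPIL_RADIUS = 8              # objective passband in pixels (assumed)
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

def capture(shift_y, shift_x):
    """One LED: tilted illumination slides the object spectrum by
    (shift_y, shift_x) in Fourier space before the pupil clips it."""
    mask = (yy - shift_y) ** 2 + (xx - shift_x) ** 2 <= PUPIL_RADIUS ** 2
    return OBJ * mask, mask

# Naively stitch captures from a 3x3 grid of illumination angles; the step
# (12 px) is smaller than the pupil diameter (16 px), so patches overlap.
stitched = np.zeros_like(OBJ)
covered = np.zeros((N, N), dtype=bool)
for sy in (-12, 0, 12):
    for sx in (-12, 0, 12):
        patch, mask = capture(sy, sx)
        stitched[mask & ~covered] = patch[mask & ~covered]
        covered |= mask

# Inverse transform of the assembled spectrum gives the output image.
recovered = np.fft.ifft2(np.fft.ifftshift(stitched)).real
print("fraction of Fourier space covered:", covered.mean())
```

In the real algorithm the pasted values must be consistent in phase where patches overlap, which is why the overlap matters and why an iterative phase estimate is needed instead of this one-shot paste.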
There's no way to get it, because we don't have phase-sensitive image sensors. So we have to guess, then go through the whole process and check for errors when we're done. We can do that because we have the bright-field image: we take the inverse Fourier transform of the assembled spectrum, check it against what we actually recorded in the bright field, incorporate any errors into our phase estimate, and do the whole process again. It typically takes about 10 iterations, but the benefit is that we also get phase information out of this whole process. A lot of microscopy folks are excited about this technique not even because of the resolution enhancement, but because of that phase information: many samples don't absorb light very well, so they don't look good in a microscope, but they do shift the phase of the light going through them, and this technique highlights that as well.

The final thing I added was a function to save all the parameters used in the reconstruction along with a unique file name, because I ended up iterating through many different tweaks before realizing that the

### [20:00](https://www.youtube.com/watch?v=9KJLWwbs_cQ&t=1200s) Segment 5 (20:00 - 22:00)

Bayer sensor was probably my problem all along. I had assumed that the reason this wasn't working for me initially was that I just didn't have all these parameters set up properly, so I made it automatically come up with a new file name to save all the parameters, to keep track of what I'd tried. So anyway, if you're interested in this, I encourage you to go to the GitHub repo and download it. You don't even need a microscope to play with this; there's sample data up there, and you can try the reconstruction without one. And if you do have a microscope and a monochrome camera, give it a shot; it's pretty fun to be able to pull out this amazing amount of detail.

One last note: you might have been wondering this whole time, why not just mechanically step over the sample? The whole benefit of this process is improved resolution over a wide field of view, but you could just use a high-magnification objective that has high resolution and mechanically step the stage. That's a good point, and I don't think this is going to shake up the entire field of microscopy, but there are some applications where it makes more sense. In this video I showed you the most obvious way of doing this, collecting one image for each light, but the techniques get a lot more interesting: you can have more than one LED on at the same time, in a coded pattern, and get away with many fewer acquisitions. Imagine you had 10 LEDs on at the same time, pseudo-randomly chosen to sample the Fourier space; then you might get away with only 10 acquisitions instead of a few hundred, which is actually significantly faster than mechanically stepping a stage. Another benefit is that you're using a low-magnification objective, so the working distance can be very long, and depending on your sample type you might be forced to have a long working distance.

This technique can also be extended to fix all kinds of other problems. For example, after going through the whole reconstruction, you can figure out what the optical aberrations of your lens system are and compensate for them automatically. And so far we've only been talking about two dimensions, but this whole technique scales up to three-dimensional acquisition as well. So this is really more of a jumping-off point than a final conclusion on what this technique can offer. Well, I hope you found that interesting, and I will see you next time. Bye!
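To put rough numbers on "as if our lens were enormous": the synthetic numerical aperture of a Fourier ptychography setup is approximately NA_objective + NA_illumination, where NA_illumination is set by the steepest LED angle. The 4 mm effective pitch matches the video; the LED count, array-to-sample distance, objective NA, and wavelength below are assumptions for illustration only.

```python
import math

led_pitch_mm = 4.0         # every other LED on a 2 mm pitch HUB75 panel (per the video)
half_extent_leds = 7       # LEDs each side of center, e.g. a 15x15 pattern (assumed)
array_distance_mm = 70.0   # LED array to sample distance (assumed)
na_objective = 0.10        # typical 4X objective (assumed)
wavelength_um = 0.53       # green LED (assumed)

# Steepest illumination angle from the outermost LED.
max_offset_mm = half_extent_leds * led_pitch_mm
na_illumination = math.sin(math.atan2(max_offset_mm, array_distance_mm))
na_synthetic = na_objective + na_illumination

# Abbe-style half-pitch resolution, lambda / (2 NA), before and after.
res_native_um = wavelength_um / (2 * na_objective)
res_synthetic_um = wavelength_um / (2 * na_synthetic)
print("NA_illum = %.3f, NA_synthetic = %.3f" % (na_illumination, na_synthetic))
print("half-pitch resolution: %.2f um -> %.2f um" % (res_native_um, res_synthetic_um))
```

With these assumed numbers the synthetic NA comes out several times larger than the objective's own NA, which is the same order of improvement the video demonstrates with a few hundred LED positions.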

---
*Source: https://ekstraktznaniy.ru/video/42379*