A complicated name for an easy way to make better microscopes
Right now, all of the images we take (whether with our cell phone cameras or in a microscope) contain several million to tens of millions of pixels. This isn’t a coincidence. When the first lens was designed many centuries ago, the first aberration came along with it, causing the resulting image to appear blurry. Only within a certain “sweet spot” does an image actually appear sharp and clear. This “sweet spot”, also called the lens field-of-view, limits all of the images we take to megapixels instead of gigapixels.
Ptychography (with a silent 'p') uses computation to significantly extend this sweet spot. First, it captures multiple images using a lens with a very large field-of-view but otherwise poor resolution. Second, it combines the captured images into a single very high resolution reconstruction using a phase retrieval algorithm (more on this below):
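To make the second step concrete, here is a toy sketch of this kind of phase retrieval in NumPy. It is not the actual algorithm from our papers: the object, grid sizes, spectrum shifts, and iteration count are all made up for illustration. The idea it demonstrates is the standard one, though: each low-resolution image constrains a different (overlapping) patch of the sample's Fourier spectrum, and we alternate between enforcing the measured image amplitudes and consistency of the shared high-resolution spectrum.

```python
import numpy as np

# Toy high-resolution object: uniform background with a dimmer square
# (sizes and values are illustrative, not from the real system).
N, n = 64, 32                      # high-res grid and low-res patch size
obj = np.ones((N, N), complex)
obj[20:44, 20:44] = 0.5
O_true = np.fft.fftshift(np.fft.fft2(obj))

# Fourier-space offsets that different illumination angles would produce
# (chosen so neighboring patches overlap substantially).
shifts = [(dy, dx) for dy in (-12, 0, 12) for dx in (-12, 0, 12)]

def crop(S, dy, dx):
    """View of the n-by-n spectrum patch centered at offset (dy, dx)."""
    c = N // 2
    return S[c + dy - n//2 : c + dy + n//2, c + dx - n//2 : c + dx + n//2]

# Simulate the low-resolution amplitudes the camera would record.
measured = [np.abs(np.fft.ifft2(np.fft.ifftshift(crop(O_true, dy, dx))))
            for dy, dx in shifts]

# Phase retrieval: start from a flat guess, then repeatedly replace each
# low-res estimate's amplitude with the measurement while keeping its
# phase, and write the result back into the shared high-res spectrum.
O = np.fft.fftshift(np.fft.fft2(np.ones((N, N), complex)))
for _ in range(50):
    for (dy, dx), amp in zip(shifts, measured):
        patch = crop(O, dy, dx)
        low = np.fft.ifft2(np.fft.ifftshift(patch))
        low = amp * np.exp(1j * np.angle(low))   # enforce measured amplitude
        patch[:] = np.fft.fftshift(np.fft.fft2(low))

recon = np.fft.ifft2(np.fft.ifftshift(O))        # high-res reconstruction
```

Because the patches overlap, each measurement "votes" on spectrum regions shared with its neighbors, which is what lets the phase (discarded by the camera) be recovered rather than just the intensities.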
At the end of the day, we can increase the number of resolvable pixels in an image by a factor of roughly 50-100, creating some of the first gigapixel images from an imaging system with no moving parts. The technique also increases the microscope's working distance and depth-of-field, computationally corrects for system aberrations, and removes the need for oil immersion.
A critical component of our new approach is the use of an LED array, which we use to illuminate our microscope sample from a number of different angles. Each time we turn on a different LED, we capture a unique image. The light from each angled LED effectively shifts new information emerging from the sample into the microscope lens. The sequence of captured images contains enough information to allow us to computationally recover very high resolution sample features (so far, down to approximately 300 nanometers).
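The geometry behind this is simple: a tilted plane wave from an off-axis LED multiplies the sample by a linear phase ramp, which shifts the sample's spectrum in Fourier space before the lens clips it. The sketch below works out that shift for a hypothetical setup; the wavelength, LED pitch, array distance, and grid parameters are illustrative assumptions, not our actual hardware values.

```python
import numpy as np

# Hypothetical geometry (all values are assumptions for illustration).
wavelength = 0.5e-6      # illumination wavelength, meters (green light)
led_pitch = 4e-3         # spacing between neighboring LEDs, meters
array_distance = 80e-3   # LED array to sample distance, meters
pixel_size = 1e-6        # effective sample-plane pixel size, meters
n_pixels = 512           # width of the reconstruction's Fourier grid

def led_to_spectrum_shift(i, j):
    """Map the (i, j) LED (offset in units of LEDs from the central one)
    to the pixel shift it induces in the sample's Fourier spectrum."""
    # Position of this LED relative to the point under the sample.
    x, y = i * led_pitch, j * led_pitch
    r = np.hypot(np.hypot(x, y), array_distance)
    # Spatial-frequency offset (cycles/meter) of the tilted plane wave:
    # the direction cosines of the illumination divided by wavelength.
    fx, fy = x / (wavelength * r), y / (wavelength * r)
    # Convert to pixels on the discrete Fourier grid.
    df = 1.0 / (n_pixels * pixel_size)   # frequency-grid spacing
    return fx / df, fy / df

# The central LED produces no shift; each off-axis LED slides a new
# region of the sample's spectrum into the lens aperture.
print(led_to_spectrum_shift(0, 0))
print(led_to_spectrum_shift(1, 0))
```

Stepping through the LEDs one by one tiles Fourier space with overlapping patches, which is exactly the set of measurements the phase retrieval step stitches back together.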
Here are some recent papers that we've published on this topic: