Deep Learning for Intelligent Microscopy

We're using machine learning algorithms to design new types of microscopes 

We have used microscopes to discover new phenomena for hundreds of years. Thanks to the digital image sensor and the computer, much of this discovery work has now started to become automated. A variety of "deep" machine learning algorithms now automatically process digital microscope images to find, classify and interpret relevant phenomena, such as indications of disease or the presence of certain cells in an assay, and in some cases even produce a fully automated diagnosis.

Despite this automation, microscopes themselves have changed relatively little - they are, for the most part, still optimized for a human viewer to peer through to examine a sample in detail, which presents a number of challenges in the clinic. The diagnosis of infection by the malaria parasite offers a good example. Due to its small size (approximately 1 micron or less), the malaria parasite (P. falciparum) must be viewed under a high-resolution objective lens (typically with oil immersion). Unfortunately, such high-resolution lenses can only image a very small area, containing just a few dozen cells. As the infection density of the malaria parasite is relatively low, one must scan through at least 100 unique fields-of-view to find enough examples to offer a sound diagnosis. This holds whether a human or an algorithm views the images of each specimen - hundreds of images are still needed, which leads to a bottleneck in the diagnosis pipeline.

The Computational Optics Lab is currently solving problems like the one above by creating new microscopes, designed by deep learning algorithms, whose captured image data contains a maximum amount of information for the algorithm's specific task. This is a joint hardware-software optimization effort. In effect, we hope to turn the microscope into an "intelligent" agent, whose goal is to physically probe each specimen so that the computer can learn as much as possible from it. Optimizable hardware components that our lab has explored or is currently exploring include programmable illumination, the optical pathway, and the detector and data management pipeline. Here are some of our current projects related to this "learned sensing" effort:

1. Learned sensing for optimized microscope illumination   

Associated papers:

"Learned sensing: jointly optimized microscope hardware for accurate image classification," Biomed. Opt. Express (2019)

"Learned Integrated Sensing Pipeline: Reconfigurable Metasurface Transceivers as Trainable Physical Layer in an Artificial Neural Network", Advanced Science (2019)

Project page with data and source code: http://deepimaging.io/learned_sensing_dnn/

To significantly improve the speed and accuracy of disease diagnosis via light microscopy, we made two key modifications to the standard microscope: 1) we added a micro-LED illumination unit whose pattern can be optimized to highlight important features of interest within each sample (e.g., the malaria parasite within blood smears), and 2) we used a deep convolutional neural network to jointly optimize this illumination unit while learning to automatically detect the presence of infection within the uniquely illuminated images.

[Figure: our setup]

Working together, these two insights allow us to achieve classification accuracies above 95% using large field-of-view, low-resolution microscope objective lenses that can see thousands of cells simultaneously (as opposed to just dozens of cells). This removes the need for mechanical scanning to obtain an accurate diagnosis, offering a dramatic speedup over the current diagnosis pipeline (i.e., from 10 minutes for manual inspection to just a few seconds for automatic inspection).
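At a high level, the joint optimization treats the LED brightnesses as an extra trainable "physical layer" placed in front of the classification network. Here is a minimal sketch in PyTorch; the LED count, layer sizes, and all names are illustrative assumptions, not our actual architecture:

```python
import torch
import torch.nn as nn

class LearnedIlluminationClassifier(nn.Module):
    """Sketch of a learned-sensing pipeline (hypothetical sizes).

    Input: a stack of images of the same sample, one per LED,
    shaped (batch, n_leds, H, W). The trainable LED weights form a
    single illumination pattern; their weighted sum simulates the
    one image that would be captured under that pattern.
    """
    def __init__(self, n_leds=25, n_classes=2):
        super().__init__()
        # One trainable brightness per LED (the "hardware" parameters)
        self.led_weights = nn.Parameter(torch.ones(n_leds) / n_leds)
        # Small CNN classifier (the "software" parameters)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, led_stack):
        # Physically, this weighted sum corresponds to firing all LEDs
        # simultaneously at their learned brightnesses and capturing
        # a single image.
        w = self.led_weights.view(1, -1, 1, 1)
        img = (led_stack * w).sum(dim=1, keepdim=True)
        return self.cnn(img)

model = LearnedIlluminationClassifier()
stack = torch.randn(4, 25, 32, 32)  # 4 samples, 25 LED images each
logits = model(stack)               # class scores, shape (4, 2)
```

Because the weighted sum is differentiable, training gradients flow through it into `led_weights`, so the illumination pattern and the classifier are optimized together; after training, the learned pattern can be programmed onto the physical LED array.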


2. Adaptively learned illumination for optimal sample classification   

Associated paper: "Towards an Intelligent Microscope: adaptively learned illumination for optimal sample classification," arXiv (2019)

Project page with data and source code: http://deepimaging.io/recurrent-illuminated-attention/

[Figure: our setup]

The Learned Sensing approach outlined above uses a convolutional neural network to establish fixed, optimized hardware settings. Here, we turn hardware optimization into a dynamic process, in which we teach the microscope how to interact with the specimen as it captures multiple images. To do so, we use a reinforcement learning algorithm that treats the microscope as an agent, making decisions on-the-fly during the image capture process (How should I illuminate the sample next? How should I change the sample position? How should I filter the resulting scattered light?).
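The capture-decide-capture loop above can be sketched as a recurrent policy that, after each image, chooses which LED to fire next and, once finished, classifies the specimen. This is a toy PyTorch illustration under assumed sizes; the network, LED count, and the `fake_capture` stand-in for the physical microscope are all hypothetical:

```python
import torch
import torch.nn as nn

class IlluminationPolicy(nn.Module):
    """Sketch of an adaptive-illumination agent (hypothetical sizes).

    At each step the agent encodes the latest capture, updates its
    recurrent memory, and picks the next LED to fire; after the last
    step it classifies the specimen from its accumulated memory.
    """
    def __init__(self, n_leds=25, feat=32, n_classes=2):
        super().__init__()
        self.feat = feat
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(),
            nn.Linear(8 * 4, feat),
        )
        self.rnn = nn.GRUCell(feat, feat)          # memory over capture steps
        self.next_led = nn.Linear(feat, n_leds)    # action: which LED next
        self.classify = nn.Linear(feat, n_classes) # final decision

    def forward(self, capture_fn, n_steps=3, batch=1):
        h = torch.zeros(batch, self.feat)
        led = torch.zeros(batch, dtype=torch.long)  # start with LED 0
        for _ in range(n_steps):
            img = capture_fn(led)                   # "microscope" takes an image
            h = self.rnn(self.encoder(img), h)
            led = self.next_led(h).argmax(dim=1)    # agent chooses next LED
        return self.classify(h), led

# Stand-in for the physical microscope: returns a random image per LED choice.
def fake_capture(led):
    return torch.randn(led.shape[0], 1, 16, 16)

policy = IlluminationPolicy()
logits, last_led = policy(fake_capture, n_steps=3, batch=4)
```

In an actual reinforcement learning setup, the `argmax` action selection would be replaced by sampling from a learned action distribution, with the policy trained (e.g., via policy gradients) on a reward such as final classification accuracy.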