New paper: deep learning can "teach" a microscope how to capture better images

[Image: malaria classification with deep learning]

Deep learning algorithms have become a popular way to extract useful information from images (e.g., to classify, segment, and track objects). We've come up with a slightly different application for deep learning: using it to extract more useful information from the physical world, rather than just from images of the world. By modifying the pipeline of a convolutional neural network, we can optimize the physical layout of the imaging device itself, namely the illumination of an optical microscope, to extract more useful information from a collection of cells infected with the malaria parasite than an alternative microscope layout could. Our CNN-designed microscope classifies malaria infection 5-10% more accurately on average than all tested alternatives, so hopefully it'll be pretty useful in the future!
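
To make the idea concrete, here is a minimal sketch (not the authors' code) of how such a pipeline can be wired up in PyTorch. It assumes the microscope records a stack of images of the same sample under several LED illumination angles, and that a trainable set of illumination weights combines them into a single image before a small CNN classifies it as infected vs. uninfected; the layer and parameter names (e.g., `LearnedIllumination`, `led_weights`) are hypothetical, and the softmax constraint stands in for whatever physical brightness constraint the real system would need.

```python
import torch
import torch.nn as nn

class LearnedIllumination(nn.Module):
    """Trainable weights over N illumination LEDs (hypothetical layer)."""
    def __init__(self, num_leds: int):
        super().__init__()
        self.led_weights = nn.Parameter(torch.ones(num_leds) / num_leds)

    def forward(self, image_stack: torch.Tensor) -> torch.Tensor:
        # image_stack: (batch, num_leds, H, W) -- one image per LED angle.
        # Softmax keeps the learned pattern non-negative and normalized,
        # a stand-in for a physical brightness constraint.
        w = torch.softmax(self.led_weights, dim=0)
        return torch.einsum('n,bnhw->bhw', w, image_stack).unsqueeze(1)

class IlluminationClassifier(nn.Module):
    """Learned illumination followed by a small CNN classifier."""
    def __init__(self, num_leds: int):
        super().__init__()
        self.illumination = LearnedIllumination(num_leds)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 2),  # infected vs. uninfected
        )

    def forward(self, image_stack: torch.Tensor) -> torch.Tensor:
        return self.cnn(self.illumination(image_stack))

# Toy usage with random stand-in data: 25 LED angles, 64x64 crops.
model = IlluminationClassifier(num_leds=25)
stacks = torch.randn(8, 25, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(stacks), labels)
loss.backward()  # gradients reach both the CNN and the LED weights
```

Because the illumination weights sit inside the same computational graph as the classifier, ordinary backpropagation updates them alongside the CNN filters; after training, the learned weights can be read off and used to program the microscope's actual illumination pattern.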

R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, "Convolutional neural networks that teach microscopes how to image," in submission; preprint on arXiv, 2017.