113 result(s) for "Psaltis, Demetri"
High speed, complex wavefront shaping using the digital micro-mirror device
Digital micro-mirror devices (DMDs) have been deployed in many optical applications. Compared with spatial light modulators (SLMs), they offer much faster refresh rates: up to 32 kHz full-frame for binary patterns, versus 120 Hz for most liquid-crystal SLMs. DMDs, however, can only display binary, unipolar patterns, and rely on temporal modulation to represent multiple gray levels with excellent accuracy in display applications. We used the built-in time-domain dynamic-range representation of the DMD to project 8-bit complex fields. With this method, we demonstrated 8-bit complex-field modulation with a frame time of 38.4 ms (around 0.15 s for the entire complex field). We performed phase conjugation by compensating the distortions incurred during propagation through free space and through a scattering medium. For faster modulation, an electro-optic modulator was synchronized with the DMD in an amplitude-modulation mode to create grayscale patterns at a frame rate of ~833 Hz, with a display time of only 1.2 ms instead of the 38.4 ms required by time multiplexing, a speed-up by a factor of 32.
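The timing figures quoted in this abstract are internally consistent; a quick sanity check, using only numbers taken from the abstract itself:

```python
# Frame times quoted in the abstract above.
TIME_MUX_FRAME_S = 38.4e-3   # one 8-bit frame via DMD time multiplexing
EOM_FRAME_S = 1.2e-3         # one grayscale frame with the synchronized EOM
COMPLEX_FIELD_S = 0.15       # "around 0.15 s for the entire complex field"

speedup = TIME_MUX_FRAME_S / EOM_FRAME_S               # factor of 32, as stated
eom_rate_hz = 1.0 / EOM_FRAME_S                        # ~833 Hz, as stated
frames_per_field = COMPLEX_FIELD_S / TIME_MUX_FRAME_S  # ~4 DMD frames per complex field

print(round(speedup), round(eom_rate_hz), round(frames_per_field))
# prints: 32 833 4
```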
Competitive photonic neural networks
Photonics offers high hopes for next-generation neural-network processors. It has now been shown that a processor built entirely from off-the-shelf photonic components can surpass the speed and energy efficiency of cutting-edge GPUs.
Stain-free identification of cell nuclei using tomographic phase microscopy in flow cytometry
Quantitative phase imaging has gained popularity in bioimaging because it avoids the need for cell staining, which in some cases is difficult or impossible. As a consequence, however, quantitative phase imaging does not label specific intracellular structures. Here we show a novel computational segmentation method, based on statistical inference, that makes it possible for quantitative phase imaging techniques to identify the cell nucleus. We demonstrate the approach with refractive index tomograms of stain-free cells reconstructed using tomographic phase microscopy in flow cytometry mode. In particular, by means of numerical simulations and two cancer cell lines, we demonstrate that the nucleus can be accurately distinguished within the stain-free tomograms. We show that our experimental results are consistent with confocal fluorescence microscopy data and microfluidic cyto-fluorimeter outputs. This is a remarkable step towards directly extracting specific three-dimensional intracellular structures from phase-contrast data in a typical flow cytometry configuration.
The accurate identification of the three-dimensional quantitative shape of a cell nucleus is now possible without fluorescent staining by applying computational segmentation to refractive index tomograms recorded in flow cytometry mode.
Learning from droplet flows in microfluidic channels using deep neural networks
A non-intrusive method is presented for measuring different fluidic properties in a microfluidic chip by optically monitoring the flow of droplets. A neural network is used to extract the desired information from images of the droplets. We demonstrate the method in two applications: measuring the concentration of each component of a water/alcohol mixture, and measuring the flow rate of the same mixture. A large number of droplet images are recorded and used to train deep neural networks (DNNs) to predict the flow rate or the concentration. We show that this method can quantify the concentration of each component with 0.5% accuracy and the flow rate with a resolution of 0.05 ml/h. The proposed method can in principle be used to measure other fluid properties such as surface tension and viscosity.
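The regression task described above can be mimicked in miniature. The sketch below is my own toy, not the paper's pipeline: synthetic one-dimensional "droplet images" whose bright-pixel count tracks the flow rate, fitted with a least-squares model standing in for the trained DNN. The imaging model and all parameters are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(flow_rate_ml_h, width=64):
    """Hypothetical 1-D 'image': a bright droplet whose length grows with flow rate."""
    img = np.zeros(width)
    length = int(10 + 40 * flow_rate_ml_h)   # invented imaging model
    img[:length] = 1.0
    return img

flows = rng.uniform(0.1, 1.0, size=200)                # flow rates in ml/h
features = np.array([make_image(f).sum() for f in flows])

# Least-squares fit of flow rate vs. bright-pixel count (stand-in for the DNN).
A = np.stack([features, np.ones_like(features)], axis=1)
coef, *_ = np.linalg.lstsq(A, flows, rcond=None)

pred = A @ coef
resolution = np.max(np.abs(pred - flows))              # worst-case error, ml/h
print(f"worst-case flow-rate error: {resolution:.3f} ml/h")
```

In this toy the worst-case error stays below the 0.05 ml/h resolution quoted in the abstract, but only because the synthetic imaging model is nearly linear; the real method needs deep networks precisely because actual droplet images are not.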
Developing optofluidic technology through the fusion of microfluidics and optics
We describe devices in which optics and fluidics are used synergistically to synthesize novel functionalities. Fluidic replacement or modification leads to reconfigurable optical systems, whereas the implementation of optics through the microfluidic toolkit gives highly compact and integrated devices. We categorize optofluidics according to three broad categories of interactions: fluid–solid interfaces, purely fluidic interfaces and colloidal suspensions. We describe examples of optofluidic devices in each category.
An actor-model framework for visual sensory encoding
A fundamental challenge in neuroengineering is determining a proper artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is downsampling the images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We have validated a learning-based approach (the actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images eliciting a neuronal response in silico and ex vivo with higher neuronal reliability than that produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel's weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses. Ultimately, the framework could also be applicable to encoding strategies in other sensory prostheses, such as cochlear implants or limb prostheses.
Encoding and downsampling images is key for visual prostheses. Here, the authors show that an actor-model framework using the inherent computation of the retinal network outperforms learning-free methods at downsampling images.
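For reference, the "learning-free" baseline that the actor-model framework is compared against amounts to fixed spatial pooling. A minimal average-pooling sketch (my own illustration; the 64x64 frame size and 8x8 implant grid are assumptions, not the paper's numbers):

```python
import numpy as np

def average_pool(image, factor):
    """Learning-free downsampling: mean over non-overlapping factor x factor
    blocks, reducing a camera frame to a prosthesis-sized grid."""
    h, w = image.shape
    h2, w2 = h // factor, w // factor
    blocks = image[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))

# e.g. a 64x64 camera frame pooled down to a hypothetical 8x8 implant grid
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
low_res = average_pool(frame, 8)
print(low_res.shape)
# prints: (8, 8)
```

The actor network replaces this fixed kernel with weights (and a contrast adjustment) learned against measured retinal responses.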
Optical phase conjugation for turbidity suppression in biological samples
Elastic optical scattering, the dominant light-interaction process in biological tissues, prevents tissues from being transparent. Although scattering may appear stochastic, it is in fact deterministic in nature. We show that, despite experimental imperfections, optical phase conjugation (λ = 532 nm) can force a transmitted light field to retrace its trajectory through a biological target and recover the original light field. For a 0.69-mm-thick chicken breast tissue section, we can enhance point-source light return by a factor of ∼5×10³ and achieve a light transmission enhancement factor of 3.8 within a collection angle of 29°. Additionally, we find that the reconstruction's quality, measured by the width of the reconstructed point source, is independent of tissue thickness (up to a thickness of 0.69 mm). This phenomenon may be used to enhance light transmission through tissue, enable measurement of small tissue movements, and form the basis of new tissue imaging techniques.
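The claim that scattering is deterministic rather than stochastic can be illustrated numerically: for a lossless medium modeled by a unitary transmission matrix T, phase-conjugating the transmitted field and sending it back through the reciprocal medium (Tᵀ) recovers the conjugate of the original field exactly, since Tᵀ conj(T) = conj(T†T) = I. This is a toy sketch of the principle, with a random unitary standing in for the tissue; it is not the authors' experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Random unitary transmission matrix: QR decomposition of a complex Gaussian.
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(z)

x = rng.normal(size=n) + 1j * rng.normal(size=n)   # input field (point source)
y = T @ x                                          # scrambled field after the medium

# Phase conjugation followed by reciprocal back-propagation (T transpose):
x_back = T.T @ np.conj(y)                          # = conj(T^dagger T x) = conj(x)

fidelity = abs(np.vdot(x_back, np.conj(x))) / (np.linalg.norm(x_back) * np.linalg.norm(x))
print(f"reconstruction fidelity: {fidelity:.6f}")  # ~1.0 for a lossless medium
```

Real tissue is lossy and the conjugation is imperfect, which is why the experiment reports finite enhancement factors rather than perfect reconstruction.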
Actor neural networks for the robust control of partially measured nonlinear systems showcased for image propagation through diffuse media
The output of physical systems, such as the scrambled pattern formed by shining the spot of a laser pointer through fog, is often easily accessible by direct measurements. However, selection of the input of such a system to obtain a desired output is difficult, because it is an ill-posed problem; that is, there are multiple inputs yielding the same output. Information transmission through scattering media is an example of this problem. Machine learning approaches for imaging have been implemented very successfully in photonics to recover the original input phase and amplitude objects of the scattering system from the distorted intensity diffraction pattern outputs. However, controlling the output of such a system, without having examples of inputs that can produce outputs in the class of the output objects the user wants to produce, is a challenging problem. Here, we propose an online learning approach for the projection of arbitrary shapes through a multimode fibre when a sample of intensity-only measurements is taken at the output. This projection system is nonlinear, because the intensity, not the complex amplitude, is detected. We show an image projection fidelity as high as ~90%, which is on par with the gold-standard methods that characterize the system fully by phase and amplitude measurements. The generality and simplicity of the proposed approach could potentially provide a new way of target-oriented control in real-world applications when only partial measurements are available.
Machine learning has become popular in solving complex optical problems such as recovering the input phase and amplitude for a specific pattern or image measured through a scattering medium. In a more challenging application, Rahmani et al. consider the problem of also producing desired outputs for such a nonlinear system when only some intensity-only measurements of example outputs are available. They develop a neural network approach that can ensure the transmission of images through a highly nonlinear system, a multimode fibre, with 90% fidelity.
Lensless high-resolution on-chip optofluidic microscopes for Caenorhabditis elegans and cell imaging
Low-cost and high-resolution on-chip microscopes are vital for reducing cost and improving efficiency for modern biomedicine and bioscience. Despite the needs, the conventional microscope design has proven difficult to miniaturize. Here, we report the implementation and application of two high-resolution (≈0.9 μm for the first and ≈0.8 μm for the second), lensless, and fully on-chip microscopes based on the optofluidic microscopy (OFM) method. These systems abandon the conventional microscope design, which requires expensive lenses and large space to magnify images, and instead utilize microfluidic flow to deliver specimens across array(s) of micrometer-size apertures defined on a metal-coated CMOS sensor to generate direct projection images. The first system utilizes a gravity-driven microfluidic flow for sample scanning and is suited for imaging elongate objects, such as Caenorhabditis elegans; the second system employs an electrokinetic drive for flow control and is suited for imaging cells and other spherical/ellipsoidal objects. As a demonstration of the OFM for bioscience research, we show that the prototypes can be used to perform automated phenotype characterization of different Caenorhabditis elegans mutant strains, and to image spores and single cellular entities. The optofluidic microscope design, readily fabricable with existing semiconductor and microfluidic technologies, offers low-cost and highly compact imaging solutions. More functionalities, such as on-chip phase and fluorescence imaging, can also be readily adapted into OFM systems. We anticipate that the OFM can significantly address a range of biomedical and bioscience needs, and engender new microscope applications.