Catalogue Search | MBRL
33 result(s) for "Bajcsy, Peter"
MIST: Accurate and Scalable Microscopy Image Stitching Tool with Stage Modeling and Error Minimization
2017
Automated microscopy can image specimens larger than the microscope's field of view (FOV) by stitching overlapping image tiles. It also enables time-lapse studies of entire cell cultures in multiple imaging modalities. We created MIST (Microscopy Image Stitching Tool) for rapid and accurate stitching of large 2D time-lapse mosaics. MIST estimates the mechanical stage model parameters (actuator backlash and stage repeatability 'r') from computed pairwise translations and then minimizes stitching errors by optimizing the translations within a (4r)² square area. MIST has a performance-oriented implementation utilizing multicore hybrid CPU/GPU computing resources, which can process terabytes of time-lapse multi-channel mosaics 15 to 100 times faster than existing tools. We created 15 reference datasets to quantify MIST's stitching accuracy. The datasets consist of three preparations of stem cell colonies seeded at low density and imaged with varying overlap (10 to 50%). The location and size of 1150 colonies are measured to quantify stitching accuracy. MIST generated stitched images with an average centroid distance error that is less than 2% of a FOV. The sources of these errors include mechanical uncertainties, specimen photobleaching, segmentation, and stitching inaccuracies. MIST produced higher stitching accuracy than three open-source tools. MIST is available in ImageJ at isg.nist.gov.
Journal Article
Deep learning predicts function of live retinal pigment epithelium from quantitative microscopy
by Ouladi, Mohamed; Manescu, Petre; Schaub, Nicholas J.
in Accuracy; Algorithms; Artificial intelligence
2020
Increases in the number of cell therapies in the preclinical and clinical phases have prompted the need for reliable and noninvasive assays to validate transplant function in clinical biomanufacturing. We developed a robust characterization methodology composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to noninvasively predict tissue function and cellular donor identity. The methodology was validated using clinical-grade induced pluripotent stem cell-derived retinal pigment epithelial cells (iPSC-RPE). QBAM images of iPSC-RPE were used to train DNNs that predicted iPSC-RPE monolayer transepithelial resistance, predicted polarized vascular endothelial growth factor (VEGF) secretion, and matched iPSC-RPE monolayers to the stem cell donors. DNN predictions were supplemented with traditional machine-learning algorithms that identified shape and texture features of single cells that were used to predict tissue function and iPSC donor identity. These results demonstrate that noninvasive cell therapy characterization can be achieved with QBAM and machine learning.
Journal Article
Towards community-driven metadata standards for light microscopy: tiered specifications extending the OME model
2021
Rigorous record-keeping and quality control are required to ensure the quality, reproducibility and value of imaging data. The 4DN Initiative and BINA here propose light Microscopy Metadata Specifications that extend the OME Data Model, scale with experimental intent and complexity, and make it possible for scientists to create comprehensive records of imaging experiments.
Journal Article
Data-driven simulations for training AI-based segmentation of neutron images
by Klimov, Nikolai N.; Daugherty, M. Cyrus; Hussey, Daniel S.
in 639/301/930/12; 639/301/930/2735; 639/766/930
2024
Neutron interferometry uniquely combines neutron imaging and scattering methods to enable characterization of multiple length scales from 1 nm to 10 µm. However, building, operating, and using such neutron imaging instruments poses constraints on the acquisition time and on the number of measured images per sample. Experiment time-constraints yield small quantities of measured images that are insufficient for automating image analyses using supervised artificial intelligence (AI) models. One approach alleviates this problem by supplementing annotated measured images with synthetic images. To this end, we create a data-driven simulation framework that supplements training data beyond typical data-driven augmentations by leveraging statistical intensity models, such as the Johnson family of probability density functions (PDFs). We follow the simulation framework steps for an image segmentation task: Estimate PDFs → Validate PDFs → Design Image Masks → Generate Intensities → Train AI Model for Segmentation. Our goal is to minimize the manual labor needed to execute the steps and maximize our confidence in simulations and segmentation accuracy. We report results for a set of nine known materials (calibration phantoms) that were imaged using a neutron interferometer acquiring four-dimensional images and segmented by AI models trained with synthetic and measured images and their masks.
Journal Article
Survey statistics of automated segmentations applied to optical imaging of mammalian cells
2015
Background
The goal of this survey paper is to overview cellular measurements using optical microscopy imaging followed by automated image segmentation. The cellular measurements of primary interest are taken from mammalian cells and their components. They are denoted as two- or three-dimensional (2D or 3D) image objects of biological interest. In our applications, such cellular measurements are important for understanding cell phenomena, such as cell counts, cell-scaffold interactions, cell colony growth rates, or cell pluripotency stability, as well as for establishing quality metrics for stem cell therapies. In this context, this survey paper is focused on automated segmentation as a software-based measurement leading to quantitative cellular measurements.
Methods
We define the scope of this survey and a classification schema first. Next, all found and manually filtered publications are classified according to the main categories: (1) objects of interest (or objects to be segmented), (2) imaging modalities, (3) digital data axes, (4) segmentation algorithms, (5) segmentation evaluations, (6) computational hardware platforms used for segmentation acceleration, and (7) object (cellular) measurements. Finally, all classified papers are converted programmatically into a set of hyperlinked web pages with occurrence and co-occurrence statistics of assigned categories.
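The occurrence and co-occurrence tallies behind such hyperlinked statistics pages can be computed with a few lines of standard-library Python. The category names below are invented examples, not the survey's actual labels.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(papers):
    """Tally how often each category label is assigned across papers,
    and how often each pair of labels co-occurs on the same paper.
    `papers` is an iterable of label sets, one set per classified paper."""
    occ, co = Counter(), Counter()
    for labels in papers:
        occ.update(labels)
        # sorted() gives each unordered pair one canonical key
        co.update(combinations(sorted(set(labels)), 2))
    return occ, co
```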
Results
The survey paper presents to a reader: (a) the state-of-the-art overview of published papers about automated segmentation applied to optical microscopy imaging of mammalian cells, (b) a classification of segmentation aspects in the context of cell optical imaging, (c) histogram and co-occurrence summary statistics about cellular measurements, segmentations, segmented objects, segmentation evaluations, and the use of computational platforms for accelerating segmentation execution, and (d) open research problems to pursue.
Conclusions
The novel contributions of this survey paper are: (1) a new type of classification of cellular measurements and automated segmentation, (2) statistics about the published literature, and (3) a web hyperlinked interface to classification statistics of the surveyed papers at
https://isg.nist.gov/deepzoomweb/resources/survey/index.html
.
Journal Article
Designing Trojan Detectors in Neural Networks Using Interactive Simulations
by Majurski, Michael; Bajcsy, Peter; Schaub, Nicholas J.
in Artificial intelligence; Classification; Datasets
2021
This paper addresses the problem of designing trojan detectors in neural networks (NNs) using interactive simulations. Trojans in NNs are defined as triggers in inputs that cause misclassification of such inputs into a class (or classes) unintended by the design of a NN-based model. The goal of our work is to understand encodings of a variety of trojan types in fully connected layers of neural networks. Our approach is: (1) to simulate nine types of trojan embeddings into dot patterns; (2) to devise measurements of NN states; and (3) to design trojan detectors in NN-based classification models. The interactive simulations are built on top of TensorFlow Playground with in-memory storage of data and NN coefficients. The simulations provide analytical, visualization, and output operations performed on training datasets and NN architectures. The measurements of a NN include: (a) model inefficiency using modified Kullback–Leibler (KL) divergence from uniformly distributed states; and (b) model sensitivity to variables related to data and NNs. Using the KL divergence measurements at each NN layer and per each predicted class label, a trojan detector is devised to discriminate NN models with or without trojans. To document robustness of such a trojan detector with respect to NN architectures, dataset perturbations, and trojan types, several properties of the KL divergence measurement are presented.
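The core of the inefficiency measurement, a KL divergence between an empirical distribution over NN states and the uniform distribution over those states, can be written compactly. This is a plain KL divergence for illustration; the paper's modified divergence and its per-layer, per-class bookkeeping are more involved.

```python
import numpy as np

def kl_from_uniform(counts):
    """D(p || u) in bits, where p is the empirical distribution obtained
    by normalizing `counts` over observed NN states and u is uniform over
    the same states. Zero means states are used uniformly (efficient);
    larger values mean a few states dominate (inefficient)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    u = 1.0 / p.size
    nz = p > 0                      # convention: 0 * log(0) = 0
    return float(np.sum(p[nz] * np.log2(p[nz] / u)))
```

A uniform state histogram scores 0, while all mass on one of four states scores log2(4) = 2 bits, the maximum for four states.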
Journal Article
Modeling, validation and verification of three-dimensional cell-scaffold contacts from terabyte-sized images
by Florczyk, Stephen J.; Szczypinski, Piotr M.; Simon, Mylene
in Algorithms; Atomic force microscopy; Bioinformatics
2017
Background
Cell-scaffold contact measurements are derived from pairs of co-registered volumetric fluorescent confocal laser scanning microscopy (CLSM) images (z-stacks) of stained cells and three types of scaffolds (i.e., spun coat, large microfiber, and medium microfiber). Our analysis of the acquired terabyte-sized collection is motivated by the need to understand the nature of the shape dimensionality (1D vs 2D vs 3D) of cell-scaffold interactions relevant to tissue engineers that grow cells on biomaterial scaffolds.
Results
We designed five statistical and three geometrical contact models, and then down-selected them to one from each category using a validation approach based on physically orthogonal measurements to CLSM. The two selected models were applied to 414 z-stacks with three scaffold types and all contact results were visually verified. A planar geometrical model for the spun coat scaffold type was validated from atomic force microscopy images by computing surface roughness of 52.35 nm ±31.76 nm which was 2 to 8 times smaller than the CLSM resolution. A cylindrical model for fiber scaffolds was validated from multi-view 2D scanning electron microscopy (SEM) images. The fiber scaffold segmentation error was assessed by comparing fiber diameters from SEM and CLSM to be between 0.46% to 3.8% of the SEM reference values. For contact verification, we constructed a web-based visual verification system with 414 pairs of images with cells and their segmentation results, and with 4968 movies with animated cell, scaffold, and contact overlays. Based on visual verification by three experts, we report the accuracy of cell segmentation to be 96.4% with 94.3% precision, and the accuracy of cell-scaffold contact for a statistical model to be 62.6% with 76.7% precision and for a geometrical model to be 93.5% with 87.6% precision.
Conclusions
The novelty of our approach lies in (1) representing cell-scaffold contact sites with statistical intensity and geometrical shape models, (2) designing a methodology for validating 3D geometrical contact models and (3) devising a mechanism for visual verification of hundreds of 3D measurements. The raw and processed data are publicly available from
https://isg.nist.gov/deepzoomweb/data/
together with the web-based verification system.
Journal Article
Semantic SEM Image Segmentation of Concrete with Contextual Labels
by Snyder, Kenneth; Brady, Mary; Feldman, Steve
in Analytical and Instrumentation Science Symposia; Data Acquisition Schemes, Machine Learning Algorithms, and Open Source Software Development for Electron Microscopy; Image processing
2019
Journal Article
Exact Tile-Based Segmentation Inference for Images Larger than GPU Memory
by Majurski, Michael; Bajcsy, Peter
in Analysis; Artificial intelligence; Artificial neural networks
2021
We address the problem of performing exact (tiling-error free) out-of-core semantic segmentation inference of arbitrarily large images using fully convolutional neural networks (FCN). FCN models have the property that once a model is trained, it can be applied on arbitrarily sized images, although it is still constrained by the available GPU memory. This work is motivated by overcoming the GPU memory size constraint without numerically impacting the final result. Our approach is to select a tile size that will fit into GPU memory with a halo border of half the network receptive field. Next, stride across the image by that tile size without the halo. The input tile halos will overlap, while the output tiles join exactly at the seams. Such an approach enables inference to be performed on whole slide microscopy images, such as those generated by a slide scanner. The novelty of this work is in documenting the formulas for determining tile size and stride and then validating them on U-Net and FC-DenseNet architectures. In addition, we quantify the errors due to tiling configurations which do not satisfy the constraints, and we explore the use of architecture effective receptive fields to estimate the tiling parameters.
Journal Article
Interactive Web-based Spatio-Statistical Image Modeling from Gigapixel Images to Improve Discovery and Traceability of Published Statistical Models
by Vandecreme, Antoine; Brady, Mary; Bajcsy, Peter
in Advances in Image Processing, Display and Analysis; Analytical and Instrumentation Science Symposia; JavaScript
2016
Journal Article