Catalogue Search | MBRL
Explore the vast range of titles available.
47,851 results for "Three-dimensional imaging"
Geometric and Topological Mesh Feature Extraction for 3D Shape Analysis
Three-dimensional surface meshes are the most common discrete representation of the exterior of a virtual shape. Extracting relevant geometric or topological features from them can simplify the way objects are looked at, help with their recognition, and facilitate description and categorization according to specific criteria. This book adopts the point of view of discrete mathematics, the aim of which is to propose discrete counterparts to concepts mathematically defined in continuous terms. It explains how standard geometric and topological notions of surfaces can be calculated and computed on a 3D surface mesh, as well as their use for shape analysis. Several applications are also detailed, demonstrating that each of them requires specific adjustments to fit with generic approaches. The book is intended not only for students, researchers and engineers in computer science and shape analysis, but also for numerical geologists, anthropologists, biologists and other scientists looking for practical solutions to their shape analysis, understanding or recognition problems.
Usefulness of three-dimensional printing of superior mesenteric vessels in right hemicolon cancer surgery
2020
The anatomy of the superior mesenteric vessels is complex, yet important, for right-sided colorectal surgery. The usefulness of three-dimensional (3D) printing of these vessels in right hemicolon cancer surgery has rarely been reported. In this prospective clinical study, 61 patients who received laparoscopic surgery for right hemicolon cancer were preoperatively randomized into 3 groups: 3D-printing (20 patients), 3D-image (19 patients), and control (22 patients). Surgery duration, bleeding volume, and number of lymph node dissections were designed as the primary end points, whereas postoperative complications, postoperative flatus recovery time, duration of hospitalization, patient satisfaction, and medical expenses were designed as secondary end points. To reduce the influence of including different surgeons in the study, the surgical team was divided into 2 groups based on surgical experience. Compared to the control group, the duration of surgery for the 3D-printing and 3D-image groups was significantly reduced (138.4 ± 19.5 and 154.7 ± 25.9 min vs. 177.6 ± 24.4 min, P = 0.000 and P = 0.006), while the number of lymph node dissections for these 2 groups was significantly increased (19.1 ± 3.8 and 17.6 ± 3.9 vs. 15.8 ± 3.0, P = 0.001 and P = 0.024). Meanwhile, the bleeding volume for the 3D-printing group was significantly reduced compared to the control group (75.8 ± 30.4 mL vs. 120.9 ± 39.1 mL, P = 0.000). Moreover, patients in the 3D-printing group reported increased satisfaction in terms of effective communication compared to those in the 3D-image and control groups. Medical expenses decreased by 6.74% after the use of 3D-printing technology. Our results show that 3D-printing technology could reduce the duration of surgery and total bleeding volume and increase the number of lymph node dissections. 3D-printing technology may be more helpful for novice surgeons.
Trial registration: Chinese Clinical Trial Registry, ChiCTR1800017161. Registered on 15 July 2018.
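As an illustrative check (not the authors' analysis — the abstract does not state which test was used), the reported surgery-duration comparison can be reproduced approximately from its summary statistics alone. A Welch t statistic from the published means, SDs, and group sizes shows why the paper reports P = 0.000 (i.e. P < 0.001):

```python
import math

# Reported values: 3D-printing group n=20, mean 138.4 min, SD 19.5;
# control group n=22, mean 177.6 min, SD 24.4.
def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic computed from summary statistics."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

t = welch_t(138.4, 19.5, 20, 177.6, 24.4, 22)
print(round(t, 2))  # |t| is roughly 5.8, far beyond conventional thresholds
```

With roughly 40 degrees of freedom, a t statistic of this magnitude corresponds to a two-sided P value well below 0.001, consistent with the reported result.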
Journal Article
MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification
by Pfister, Hanspeter; Wei, Donglai; Yang, Jiancheng
in Algorithms
2023
We introduce MedMNIST v2, a large-scale MNIST-like dataset collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into a small size of 28 × 28 (2D) or 28 × 28 × 28 (3D) with the corresponding classification labels, so that no background knowledge is required of users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various dataset scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression, and multi-label). The resulting dataset, consisting of 708,069 2D images and 9,998 3D images in total, could support numerous research and educational purposes in biomedical image analysis, computer vision, and machine learning. We benchmark several baseline methods on MedMNIST v2, including 2D/3D neural networks and open-source/commercial AutoML tools. The data and code are publicly available at https://medmnist.com/.
Measurement(s): supervised machine learning. Technology Type(s): machine learning.
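Because MedMNIST arrays need no domain-specific preprocessing, even a trivial baseline can consume them directly. The sketch below is an illustrative stand-in, not one of the paper's benchmarks: it builds random arrays with MedMNIST-like 2D shapes (the real data would come from https://medmnist.com/) and classifies them with a nearest-centroid rule, far simpler than the 2D/3D neural networks and AutoML tools benchmarked in the paper:

```python
import numpy as np

# Mock stand-in for a MedMNIST v2 2D dataset: images are pre-processed to
# 28x28 (2D) or 28x28x28 (3D) with integer class labels.
rng = np.random.default_rng(0)
n_train, n_classes = 200, 3
X = rng.random((n_train, 28, 28)).astype(np.float32)
y = rng.integers(0, n_classes, n_train)

# Flatten and fit a nearest-centroid baseline: one mean image per class.
X_flat = X.reshape(n_train, -1)
centroids = np.stack([X_flat[y == c].mean(axis=0) for c in range(n_classes)])

def predict(batch):
    """Assign each flattened image to its nearest class centroid."""
    d = np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

pred = predict(X_flat)
print(pred.shape)  # one predicted label per image
```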
Journal Article
MINFLUX nanometer-scale 3D imaging and microsecond-range tracking on a common fluorescence microscope
by Schmidt, Roman; Wurm, Christian A.; Weihs, Tobias
2021
The recently introduced minimal photon fluxes (MINFLUX) concept pushed the resolution of fluorescence microscopy to molecular dimensions. Initial demonstrations relied on custom made, specialized microscopes, raising the question of the method’s general availability. Here, we show that MINFLUX implemented with a standard microscope stand can attain 1–3 nm resolution in three dimensions, rendering fluorescence microscopy with molecule-scale resolution widely applicable. Advances, such as synchronized electro-optical and galvanometric beam steering and a stabilization that locks the sample position to sub-nanometer precision with respect to the stand, ensure nanometer-precise and accurate real-time localization of individually activated fluorophores. In our MINFLUX imaging of cell- and neurobiological samples, ~800 detected photons suffice to attain a localization precision of 2.2 nm, whereas ~2500 photons yield precisions <1 nm (standard deviation). We further demonstrate 3D imaging with localization precision of ~2.4 nm in the focal plane and ~1.9 nm along the optic axis. Localizing with a precision of <20 nm within ~100 µs, we establish this spatio-temporal resolution in single fluorophore tracking and apply it to the diffusion of single labeled lipids in lipid-bilayer model membranes.
Minimal photon fluxes (MINFLUX) has enabled molecule-scale resolution in fluorescence microscopy, but this had not been shown on standard, broadly applicable microscopy platforms. Here the authors report an implementation on a common microscope stand that retains normal fluorescence microscopy while also providing 1–3 nm 3D resolution.
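For context, a rough back-of-the-envelope comparison (assumed numbers, not from the paper): conventional camera-based single-molecule localization is shot-noise limited, with precision scaling roughly as the PSF width divided by the square root of the detected photon count. Assuming a typical PSF standard deviation of 100 nm, the sketch below asks what that scaling would deliver at the photon counts quoted above:

```python
import math

# Illustrative shot-noise scaling for conventional camera-based
# localization: precision ~ sigma_psf / sqrt(N) for N detected photons.
# The 100 nm PSF width is an assumed, typical value.
def shot_noise_precision(sigma_psf_nm, n_photons):
    """Shot-noise-limited localization precision in nm."""
    return sigma_psf_nm / math.sqrt(n_photons)

def photons_needed(sigma_psf_nm, target_nm):
    """Photon count needed for a target precision under the same scaling."""
    return (sigma_psf_nm / target_nm) ** 2

print(round(shot_noise_precision(100, 800), 2))  # ~3.54 nm at 800 photons
print(round(photons_needed(100, 2.2)))           # ~2066 photons for 2.2 nm
```

Under these assumptions, reaching the 2.2 nm precision that MINFLUX obtains from ~800 detected photons would take over 2,000 photons with conventional scaling, and real camera-based localization (with background and pixelation) typically fares worse than this idealized estimate.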
Journal Article
3DeeCellTracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images
by Kimura, Koutarou D.; Nemoto, Tomomi; Yamaguchi, Kazushi
in Animals; bioimaging; Brain - diagnostic imaging
2021
Despite recent improvements in microscope technologies, segmenting and tracking cells in three-dimensional time-lapse images (3D + T images) to extract their dynamic positions and activities remains a considerable bottleneck in the field. We developed a deep learning-based software pipeline, 3DeeCellTracker, by integrating multiple existing and new techniques, including deep learning for tracking. With only one volume of training data, one initial correction, and a few parameter changes, 3DeeCellTracker successfully segmented and tracked ~100 cells in the brains of both semi-immobilized and ‘straightened’ freely moving worms, in a naturally beating zebrafish heart, and ~1000 cells in a 3D cultured tumor spheroid. While these datasets were imaged with highly divergent optical systems, our method tracked 90–100% of the cells in most cases, which is comparable or superior to previous results. These results suggest that 3DeeCellTracker could pave the way for revealing dynamic cell activities in image datasets that have been difficult to analyze.

Microscopes have been used to decrypt the tiny details of life since the 17th century. Now, the advent of 3D microscopy allows scientists to build up detailed pictures of living cells and tissues. In that effort, automation is becoming increasingly important so that scientists can analyze the resulting images and understand how bodies grow, heal and respond to changes such as drug therapies. In particular, algorithms can help to spot cells in the picture (called cell segmentation), and then to follow these cells over time across multiple images (known as cell tracking). However, performing these analyses on 3D images over a given period has been quite challenging. In addition, the algorithms that have already been created are often not user-friendly, and they can only be applied to a specific dataset gathered through a particular scientific method. As a response, Wen et al. developed a new program called 3DeeCellTracker, which runs on a desktop computer and uses a type of artificial intelligence known as deep learning to produce consistent results. Crucially, 3DeeCellTracker can be used to analyze various types of images taken using different types of cutting-edge microscope systems. And indeed, the algorithm was then harnessed to track the activity of nerve cells in moving microscopic worms, of beating heart cells in a young small fish, and of cancer cells grown in the lab. This versatile tool can now be used across biology, medical research and drug development to help monitor cell activities.
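To make the segment-then-track idea concrete, here is an illustrative sketch (not the authors' code): for well-separated cells, the tracking stage can be approximated by matching each segmented centroid in one volume to its nearest centroid in the previous volume. 3DeeCellTracker replaces this naive step with deep-learning-based matching that is robust to large deformations, but the interface — centroids in, identities out — is the same:

```python
import numpy as np

def nearest_neighbor_track(prev, curr):
    """Greedy one-step tracking: for each 3D centroid in `curr`, return
    the index of the closest centroid in `prev`."""
    d = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=2)
    return d.argmin(axis=1)

# Three toy cell centroids (x, y, z) and the same cells after small motion.
prev = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
curr = prev + 0.5  # each cell shifts slightly between volumes

print(nearest_neighbor_track(prev, curr))  # each cell keeps its identity
```

This greedy scheme breaks down exactly where the paper's datasets are hard — dense cells, large motion between volumes — which is the regime the deep-learning tracking in 3DeeCellTracker is designed to handle.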
Journal Article