Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
112,643 result(s) for "bench"
Impact of Cryopreservation and Freeze-Thawing on Therapeutic Properties of Mesenchymal Stromal/Stem Cells and Other Common Cellular Therapeutics
by Porter, Amanda Paige; Nguyen, Jimmy; Turner-Lyles, Caitlin
in Biomedical and Life Sciences; Biomedical Engineering/Biotechnology; Biomedicine
2022
Purpose of Review
Cryopreservation and its associated freezing and thawing procedures, "freeze-thawing" for short, are among the final steps in economically viable manufacturing and clinical application of diverse cellular therapeutics. Translation from preclinical proof-of-concept studies to larger clinical trials has indicated that these processes may present an Achilles' heel to optimal cell product safety and, in particular, efficacy in clinical trials and routine use.
Recent Findings
We review the current state of the literature on how cryopreservation of cellular therapies has evolved and how the application of this technique to different cell types is interlinked with their ability to engraft and function upon transfer in vivo, in particular for hematopoietic stem and progenitor cells (HSPCs), their progeny, and therapeutic cell products derived therefrom. We also discuss the pros and cons of how this may differ for non-hematopoietic mesenchymal stromal/stem cell (MSC) therapeutics. We present different avenues that may be crucial for cell therapy optimization, both for hematopoietic products (e.g., effector, regulatory, and chimeric antigen receptor (CAR)-modified T and NK cell based products) and for non-hematopoietic products, such as MSCs and induced pluripotent stem cells (iPSCs), to achieve optimal viability, recovery, effective cell dose, and functionality of the cryorecovered cells.
Summary
Targeted research into optimizing cryopreservation and freeze-thawing routines and the adjunct manufacturing process design may provide crucial advantages, increasing both the safety and efficacy of cellular therapeutics in clinical use and enabling effective market deployment strategies that make them economically viable and sustainable medicines.
Journal Article
Adipose Tissue and Mesenchymal Stem Cells: State of the Art and Lipogems® Technology Development
by Ventura, Carlo; Colombo, Valeria; Tremolada, Carlo
in Adipocytes; Adipose tissue; Biomedical and Life Sciences
2016
In the past few years, interest in adipose tissue as an ideal source of mesenchymal stem cells (MSCs) has increased. These cells are multipotent and may differentiate in vitro into several cellular lineages, such as adipocytes, chondrocytes, osteoblasts, and myoblasts. In addition, they secrete many bioactive molecules and thus are considered “mini-drugstores.” MSCs are being used increasingly for many clinical applications, such as orthopedic, plastic, and reconstructive surgery. Adipose-derived MSCs are routinely obtained enzymatically from fat lipoaspirate as the stromal vascular fraction (SVF) and/or may undergo prolonged ex vivo expansion, with significant senescence and a decrease in multipotency, leading to unsatisfactory clinical results. Moreover, these techniques are hampered by complex regulatory issues. Therefore, an innovative technique (Lipogems®; Lipogems International SpA, Milan, Italy) was developed to obtain microfragmented adipose tissue with an intact stromal vascular niche and MSCs with a high regenerative capacity. The Lipogems® technology, patented in 2010 and clinically available since 2013, is an easy-to-use system designed to harvest, process, and inject refined fat tissue and is characterized by optimal handling ability and great regenerative potential based on adipose-derived MSCs. In this technology, the adipose tissue is washed, emulsified, and rinsed, and adipose cluster dimensions are gradually reduced to about 0.3 to 0.8 mm. In the resulting Lipogems® product, pericytes are retained within an intact stromal vascular niche and are ready to interact with the recipient tissue after transplantation, thereby becoming MSCs and starting the regenerative process. Lipogems® has been used in more than 7000 patients worldwide in aesthetic medicine and surgery, as well as in orthopedic and general surgery, with remarkable and promising results and seemingly no drawbacks. Several clinical trials are now under way to support the initial encouraging outcomes. Lipogems® technology is emerging as a valid intraoperative system for obtaining an optimal final product that may be used immediately for regenerative purposes.
Journal Article
Butterfly optimization algorithm: a novel approach for global optimization
2019
Real-world problems are complex, multidimensional, and multimodal in nature, which encourages computer scientists to develop better and more efficient problem-solving methods. Nature-inspired metaheuristics have shown better performance than traditional approaches. To date, researchers have presented and experimented with various nature-inspired metaheuristic algorithms to handle various search problems. This paper introduces a new nature-inspired algorithm, namely the butterfly optimization algorithm (BOA), which mimics the food search and mating behavior of butterflies, to solve global optimization problems. The framework is mainly based on the foraging strategy of butterflies, which utilize their sense of smell to determine the location of nectar or a mating partner. In this paper, the proposed algorithm is tested and validated on a set of 30 benchmark test functions and its performance is compared with other metaheuristic algorithms. BOA is also employed to solve three classical engineering problems (spring design, welded beam design, and gear train design). Results indicate that the proposed BOA is more efficient than the other metaheuristic algorithms.
Journal Article
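The abstract describes BOA's mechanism, fragrance-guided switching between a global move toward the best-known solution and a local move among random peers, without giving the equations. The following minimal Python sketch illustrates that scheme; the inverse-fitness mapping used for stimulus intensity and the parameter values (c, a, and the switch probability p) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def boa(objective, dim, bounds, n=30, iters=200, c=0.01, a=0.1, p=0.8, seed=0):
    """Minimal butterfly-optimization sketch (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n, dim))            # butterfly positions
    fit = np.apply_along_axis(objective, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()

    for _ in range(iters):
        # Fragrance grows with stimulus intensity; this inverse-fitness
        # surrogate for intensity is an assumption, not the paper's formula.
        intensity = 1.0 / (1.0 + fit - fit.min())
        frag = c * intensity ** a
        for i in range(n):
            r = rng.random()
            if rng.random() < p:                 # global phase: toward the best
                X[i] += (r * r * best - X[i]) * frag[i]
            else:                                # local phase: among random peers
                j, k = rng.integers(0, n, size=2)
                X[i] += (r * r * X[j] - X[k]) * frag[i]
            X[i] = np.clip(X[i], lo, hi)
        fit = np.apply_along_axis(objective, 1, X)
        if fit.min() < best_fit:
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit

# Example: the 10-dimensional sphere benchmark function.
x, fx = boa(lambda v: float(np.sum(v * v)), dim=10, bounds=(-5.0, 5.0))
```

On a smooth unimodal benchmark such as the sphere function, this sketch should drive the objective close to zero within a few hundred iterations.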
A new algorithm for normal and large-scale optimization problems: Nomadic People Optimizer
2020
Metaheuristic algorithms have received much attention recently for solving different optimization and engineering problems. Most of these methods were inspired by nature or by the behavior of certain swarms, such as birds, ants, bees, or even bats, while others were inspired by a specific social behavior, such as colonies or political ideologies. These algorithms face an important issue: balancing the global search (exploration) and local search (exploitation) capabilities. In this research, a novel swarm-based metaheuristic algorithm based on the behavior of nomadic people was developed, called the “Nomadic People Optimizer (NPO)”. The proposed algorithm simulates how these people move and search for sources of life (such as water or grass for grazing) and how they have lived for hundreds of years, continuously migrating to the most comfortable and suitable places to live. The algorithm is primarily designed around a multi-swarm approach: it consists of several clans, each looking for the best place, in other words, the best solution, depending on the position of its leader. The algorithm is validated on 36 unconstrained benchmark functions. For comparison, six well-established nature-inspired algorithms are used to evaluate the robustness of the NPO algorithm. The proposed and benchmark algorithms are also tested on large-scale optimization problems associated with high-dimensional variability. The attained results demonstrate remarkable solution quality for the NPO algorithm, as well as high convergence, fewer iterations, and less time required to find the current best solution.
Journal Article
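The abstract outlines NPO's multi-swarm structure (several clans, each exploring around its leader, with periodic movement toward the best leader) but not its exact operators. The sketch below is a schematic multi-clan search in that spirit; the sampling distribution, the shrinking radius, and the migration rule are placeholder choices, not the published algorithm.

```python
import numpy as np

def npo_sketch(objective, dim, bounds, clans=5, members=6, iters=200, seed=0):
    """Schematic multi-clan search in the spirit of NPO (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    leaders = rng.uniform(lo, hi, (clans, dim))          # one leader per clan
    lead_fit = np.apply_along_axis(objective, 1, leaders)

    for t in range(iters):
        radius = 0.1 * (hi - lo) * (1.0 - t / iters)     # shrinking search area
        for ci in range(clans):
            # Families wander around their clan leader's camp.
            trials = leaders[ci] + rng.normal(0.0, radius, (members, dim))
            trials = np.clip(trials, lo, hi)
            tf = np.apply_along_axis(objective, 1, trials)
            if tf.min() < lead_fit[ci]:                  # a better camp was found
                lead_fit[ci], leaders[ci] = tf.min(), trials[tf.argmin()]
        # Periodical meeting: other leaders migrate toward the best leader.
        best = lead_fit.argmin()
        for ci in range(clans):
            if ci != best:
                leaders[ci] += rng.random() * (leaders[best] - leaders[ci])
                leaders[ci] = np.clip(leaders[ci], lo, hi)
                lead_fit[ci] = objective(leaders[ci])
    return leaders[lead_fit.argmin()], lead_fit.min()

# Example: minimize the 20-dimensional sphere function.
x, fx = npo_sketch(lambda v: float(np.sum(v * v)), dim=20, bounds=(-10.0, 10.0))
```

The leader-centered sampling handles exploitation while the inter-clan migration handles exploration, which is the exploration/exploitation balance the abstract emphasizes.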
Image Matching Across Wide Baselines: From Paper to Practice
2021
We introduce a comprehensive benchmark for local features and robust estimation algorithms, focusing on the downstream task—the accuracy of the reconstructed camera pose—as our primary metric. Our pipeline’s modular structure allows easy integration, configuration, and combination of different methods and heuristics. This is demonstrated by embedding dozens of popular algorithms and evaluating them, from seminal works to the cutting edge of machine learning research. We show that with proper settings, classical solutions may still outperform the perceived state of the art. Besides establishing the actual state of the art, the conducted experiments reveal unexpected properties of structure from motion pipelines that can help improve their performance, for both algorithmic and learned methods. Data and code are online (https://github.com/ubc-vision/image-matching-benchmark), providing an easy-to-use and flexible framework for the benchmarking of local features and robust estimation methods, both alongside and against top-performing methods. This work provides a basis for the Image Matching Challenge (https://image-matching-challenge.github.io).
Journal Article
Benchmarking Low-Light Image Enhancement and Beyond
2021
In this paper, we present a systematic review and evaluation of existing single-image low-light enhancement algorithms. Besides the commonly used low-level vision oriented evaluations, we additionally consider measuring machine vision performance in the low-light condition via a face detection task to explore the potential of joint optimization of high-level and low-level vision enhancement. To this end, we first propose a large-scale low-light image dataset serving both low/high-level vision with diversified scenes and contents as well as complex degradation in real scenarios, called Vision Enhancement in the LOw-Light condition (VE-LOL). Beyond paired low/normal-light images without annotations, we additionally include resources for human-related analysis, i.e., face images in the low-light condition with annotated face bounding boxes. Then, efforts are made on benchmarking from the perspective of both human and machine vision. A rich variety of criteria is used for the low-level vision evaluation, including full-reference, no-reference, and semantic similarity metrics. We also measure the effects of low-light enhancement on face detection in the low-light condition. State-of-the-art face detection methods are used in the evaluation. Furthermore, with the rich material of VE-LOL, we explore the novel problem of joint low-light enhancement and face detection. We develop an enhanced face detector that applies low-light enhancement and face detection jointly. The features extracted by the enhancement module are fed to the successive layer at the same resolution in the detection module. Thus, these features are intertwined to jointly learn useful information across the two phases, i.e., enhancement and detection. Experiments on VE-LOL provide a comparison of state-of-the-art low-light enhancement algorithms, point out their limitations, and suggest promising future directions. Our dataset has supported the Track “Face Detection in Low Light Conditions” of the CVPR UG2+ Challenge (2019–2020) (http://cvpr2020.ug2challenge.org/).
Journal Article
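The abstract's enhanced face detector feeds features from the enhancement module into detection layers of matching resolution. The PyTorch sketch below illustrates that wiring only; the layer sizes, the concatenation scheme, and the toy detection head are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointEnhanceDetect(nn.Module):
    """Sketch: enhancement features feed a detector at matching resolution."""
    def __init__(self, feat_ch=16):
        super().__init__()
        # Enhancement branch: shared features plus an enhanced RGB image.
        self.enh_feat = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.to_image = nn.Conv2d(feat_ch, 3, 3, padding=1)
        # Toy detection head consuming the image and the shared features;
        # per cell it predicts (score, x, y, w, h).
        self.det = nn.Sequential(
            nn.Conv2d(3 + feat_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 5, 1),
        )

    def forward(self, low_light):
        feats = self.enh_feat(low_light)                # shared features
        enhanced = torch.sigmoid(self.to_image(feats))  # enhanced image in [0,1]
        det_in = torch.cat([enhanced, feats], dim=1)    # intertwine both phases
        return enhanced, self.det(det_in)

model = JointEnhanceDetect()
dummy = torch.rand(1, 3, 128, 128)      # a fake low-light image
enhanced, det_maps = model(dummy)       # shapes: (1, 3, 128, 128), (1, 5, 128, 128)
```

Because both heads backpropagate through the shared enhancement features, the enhancement is trained toward what helps detection, not only toward pixel fidelity.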
LaSOT: A High-quality Large-scale Single Object Tracking Benchmark
2021
Despite great recent advances in visual tracking, its further development, including both algorithm design and evaluation, is limited by the lack of dedicated large-scale benchmarks. To address this problem, we present LaSOT, a high-quality Large-scale Single Object Tracking benchmark. LaSOT contains a diverse selection of 85 object classes and offers 1,550 videos totaling more than 3.87 million frames. Each video frame is carefully and manually annotated with a bounding box. This makes LaSOT, to our knowledge, the largest densely annotated tracking benchmark. Our goal in releasing LaSOT is to provide a dedicated, high-quality platform for both training and evaluation of trackers. The average video length of LaSOT is around 2,500 frames, and each video contains various challenge factors that exist in real-world video footage, such as the target disappearing and re-appearing. These longer video lengths allow for the assessment of long-term trackers. To take advantage of the close connection between visual appearance and natural language, we provide a language specification for each video in LaSOT. We believe such additions will allow future research to use linguistic features to improve tracking. Two protocols, full-overlap and one-shot, are designated for flexible assessment of trackers. We extensively evaluate 48 baseline trackers on LaSOT with in-depth analysis, and the results reveal that there still exists significant room for improvement. The complete benchmark, tracking results, and analysis are available at http://vision.cs.stonybrook.edu/~lasot/.
Journal Article
Update: use of the benchmark dose approach in risk assessment
by Jeger, Michael John; Noteborn, Hubert; Hardy, Anthony
in benchmark dose; benchmark response; Benchmarks
2017
The Scientific Committee (SC) reconfirms that the benchmark dose (BMD) approach is a scientifically more advanced method compared to the NOAEL approach for deriving a Reference Point (RP). Most of the modifications made to the SC guidance of 2009 concern the section providing guidance on how to apply the BMD approach. Model averaging is recommended as the preferred method for calculating the BMD confidence interval, while acknowledging that the respective tools are still under development and may not be easily accessible to all. Therefore, selecting or rejecting models is still considered a suboptimal alternative. The set of default models to be used for BMD analysis has been reviewed, and the Akaike information criterion (AIC) has been introduced instead of the log-likelihood to characterise the goodness of fit of different mathematical models to a dose-response data set. A flowchart has also been inserted in this update to guide the reader step-by-step when performing a BMD analysis, as well as a chapter on the distributional part of dose-response models and a template for reporting a BMD analysis in a complete and transparent manner. Finally, it is recommended to always report the BMD confidence interval rather than the value of the BMD. The lower bound (BMDL) is needed as a potential RP, and the upper bound (BMDU) is needed for establishing the BMDU/BMDL ratio, reflecting the uncertainty in the BMD estimate. This updated guidance does not call for a general re-evaluation of previous assessments where the NOAEL approach or the BMD approach as described in the 2009 SC guidance was used, in particular when the exposure is clearly smaller (e.g. more than one order of magnitude) than the health-based guidance value. Finally, the SC firmly reiterates its recommendation to reconsider test guidelines given the expected wide application of the BMD approach.
http://onlinelibrary.wiley.com/doi/10.2903/sp.efsa.2017.EN-1147/full
Journal Article
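The update's preference for model averaging rests on weighting candidate dose-response models, with AIC now characterising goodness of fit. The Python sketch below shows only the standard Akaike-weight step, applied to hypothetical fits with made-up numbers; the guidance's full procedure (model-averaged BMD confidence intervals, typically obtained by bootstrap) is more involved.

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights: relative support for each fitted model."""
    a = np.asarray(aics, dtype=float)
    delta = a - a.min()            # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values and BMD estimates from three dose-response
# models fitted to one data set (illustrative numbers only).
fits = [("log-logistic", 102.4, 1.8),
        ("Weibull",      103.1, 2.3),
        ("probit",       107.9, 3.0)]

w = aic_weights([aic for _, aic, _ in fits])
bmd_avg = sum(wi * bmd for wi, (_, _, bmd) in zip(w, fits))
print({name: round(float(wi), 3) for (name, _, _), wi in zip(fits, w)})
print(round(bmd_avg, 2))        # weighted point estimate, not a BMDL/BMDU
```

Note how a model only a few AIC units worse still receives non-trivial weight, which is why averaging is preferred over simply selecting the single best-fitting model.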
ImageNet Large Scale Visual Recognition Challenge
2015
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.
Journal Article