Catalogue Search | MBRL
Explore the vast range of titles available.
18,467 result(s) for "Comparison analysis"
Natural Forest Mapping in the Andes (Peru): A Comparison of the Performance of Machine-Learning Algorithms
by Vega Isuhuaylas, Luis; Serrudo Torobeo, Noemi; Ventura Santos, Lenin
in Algorithms, Classification, Classifiers
2018
The Andes mountain forests are sparse relict populations of tree species that grow in association with local native shrubland species. The identification of forest conditions for conservation in areas such as these is based on remote sensing techniques and classification methods. However, the classification of Andes mountain forests is difficult because of noise in the reflectance data within land cover classes. The noise is the result of variations in terrain illumination resulting from complex topography and the mixture of different land cover types occurring at the sub-pixel level. Considering these issues, the selection of an optimum classification method to obtain accurate results is very important to support conservation activities. We carried out comparative non-parametric statistical analyses on the performance of several classifiers produced by three supervised machine-learning algorithms: Random Forest (RF), Support Vector Machine (SVM), and k-Nearest Neighbor (kNN). The SVM and RF methods were not significantly different in their ability to separate Andes mountain forest and shrubland land cover classes, and their best classifiers showed a significantly better classification accuracy (AUC values 0.81 and 0.79 respectively) than the one produced by the kNN method (AUC value 0.75) because the latter was more sensitive to noisy training data.
Journal Article
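The abstract above compares Random Forest, SVM and kNN classifiers by ROC AUC. A minimal sketch of that kind of comparison is shown below; it is not the authors' code, and the synthetic data merely stands in for per-pixel reflectance features.

```python
# Illustrative sketch only: comparing RF, SVM and kNN by cross-validated ROC AUC.
# The synthetic data stands in for per-pixel reflectance features; it is not the
# authors' dataset or code.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Noisy two-class problem standing in for "forest vs. shrubland" pixels.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           flip_y=0.1, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=7),
}

for name, model in models.items():
    # 5-fold cross-validated ROC AUC, analogous to the AUC comparison in the abstract.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.2f} ± {scores.std():.2f}")
```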
Information Extraction and Spatial Distribution of Research Hot Regions on Rocky Desertification in China
2018
Rocky desertification is an important type of ecological degradation in southwest China. Using web crawler technology, the authors obtained 9345 journal papers related to rocky desertification in China published from the 1950s to 2016. They also constructed a processing workflow to extract research hot regions on rocky desertification and produced a spatial distribution map of these regions. Finally, the authors compared this spatial distribution with the sensitivity map of rocky desertification to identify the differences between the two maps. The study shows that: (1) rocky desertification research hot regions in China are mainly distributed in Guizhou, Yunnan and Guangxi, especially in Bijie, Liupanshui, Guiyang, Anshun and the Qianxinan, Qiannan and Qiandongnan Autonomous Prefectures in Guizhou Province; Hechi, Baise, Nanning and Guilin in Guangxi Zhuang Autonomous Region; and Zhaotong in Yunnan Province. (2) The research hot regions on rocky desertification show good spatial consistency with the sensitivity regions of rocky desertification; at the prefecture level, the overlap rate of the two regions reached 85%. Because of the influence of topography, vegetation coverage, population distribution, traffic accessibility and other factors, some regions combined high sensitivity with low research popularity regarding rocky desertification; these sites included the Qionglai Mountain-Liangshan area of Sichuan, the Wushan-Shennongjia area of Hubei, the Hengduan Mountain area of western Yunnan and the Dupangling area of southern Hunan. (3) The research hot regions and sensitive regions cannot be matched completely in time, space and concept. Therefore, their spatial distribution differences can be used to improve the targeting of planning, governance and study of rocky desertification.
Journal Article
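The prefecture-level overlap rate in finding (2) above is, at its core, a set comparison. The toy sketch below illustrates such a calculation; the prefecture lists and the choice of denominator are assumptions for illustration, not the authors' data or definition.

```python
# Illustrative sketch (not the authors' pipeline): a prefecture-level overlap rate
# between "research hot" prefectures and "high-sensitivity" prefectures.
# Prefecture lists and the denominator choice are placeholders, not the study's data.
hot_regions = {"Bijie", "Liupanshui", "Guiyang", "Anshun", "Hechi", "Baise", "Zhaotong"}
sensitive_regions = {"Bijie", "Liupanshui", "Guiyang", "Anshun", "Hechi", "Baise", "Guilin"}

overlap = hot_regions & sensitive_regions
overlap_rate = len(overlap) / len(hot_regions)  # fraction of hot regions that are also sensitive
print(f"Overlap rate: {overlap_rate:.0%}")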
PRYSM: An open‐source framework for PRoxY System Modeling, with applications to oxygen‐isotope systems
2015
Paleoclimate observations constitute the only constraint on climate behavior prior to the instrumental era. However, such observations only provide indirect (proxy) constraints on physical variables. Proxy system models aim to improve the interpretation of such observations and better quantify their inherent uncertainties. However, existing models are currently scattered in the literature, making their integration difficult. Here, we present a comprehensive modeling framework for proxy systems, named PRYSM. For this initial iteration, we focus on water‐isotope based climate proxies in ice cores, corals, tree ring cellulose, and speleothem calcite. We review modeling approaches for each proxy class, and pair them with an isotope‐enabled climate simulation to illustrate the new scientific insights that may be gained from this framework. Applications include parameter sensitivity analysis, the quantification of the effect of archive‐specific processes on the recorded climate signal, and the quantification of how chronological uncertainties affect signal detection, demonstrating the utility of PRYSM for a broad array of climate studies. Key Points: (1) A new modeling framework for paleoclimate proxies is proposed (PRYSM); (2) PRYSM bridges the gap between GCMs and paleoclimate observations; (3) PRYSM may improve interpretation and uncertainty quantification of paleodata.
Journal Article
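Proxy system models of the kind described above map simulated climate fields through sensor, archive and observation stages. The toy forward model below is only in that spirit; the linear coral-type sensitivities, noise level and variable names are assumptions for illustration, not PRYSM's actual models or default parameters.

```python
# Toy proxy-system forward model: climate fields (SST, SSS anomalies) are mapped
# to a coral-type water-isotope pseudoproxy, then archive/measurement noise is added.
# Coefficients and noise level are illustrative assumptions, not PRYSM defaults.
import numpy as np

rng = np.random.default_rng(42)
n_years = 200
sst_anom = rng.normal(0.0, 0.5, n_years)    # stand-in SST anomalies (deg C)
sss_anom = rng.normal(0.0, 0.2, n_years)    # stand-in SSS anomalies (psu)

a1, a2 = -0.22, 0.27                         # assumed sensitivities (per mil per unit)
pseudoproxy = a1 * sst_anom + a2 * sss_anom  # "sensor" stage
observed = pseudoproxy + rng.normal(0.0, 0.1, n_years)  # archive + measurement noise

print(f"Signal-to-noise ratio of the pseudoproxy: {pseudoproxy.std() / 0.1:.2f}")
```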
Visualizing inconsistency in network meta-analysis by independent path decomposition
by König, Jochem; Krahn, Ulrike; Binder, Harald
in Antidepressants, Antidepressive Agents - therapeutic use, Comparative analysis
2014
Background
In network meta-analysis, several alternative treatments can be compared by pooling the evidence from all randomised comparisons made in different studies. Incorporating indirect conclusions requires a consistent network of treatment effects, so assessing this assumption, and the influence of deviations from it, is fundamental for evaluating validity.
Methods
We show that network estimates for single pairwise treatment comparisons can be approximated by the evidence of a subnet that is decomposable into independent paths. Path-based estimates and the estimate of the residual evidence can be used with their contribution to the network estimate to set up a forest plot for the consistency assessment. Using a network meta-analysis of twelve antidepressants and controlled perturbations in the real and constructed consistent data, we discuss the consistency assessment by the independent path decomposition in contrast to an approach using a recently presented graphical tool, the net heat plot. In addition, we define influence functions that describe how changes in study effects are translated into network estimates.
Results
While the consistency assessment by the net heat plot comprises all network estimates, an independent path decomposition and its visualisation in a forest plot are tailored to one specific treatment comparison. This makes it possible to recognise whether inconsistencies between different paths of evidence, or outlier effects, affect the treatment comparison under consideration.
Conclusions
The approximation of the network estimate for a single comparison by the evidence of a subnet, and the visualisation of its decomposition into independent paths, make applicable a graphical validation instrument that is known from classical meta-analysis.
Journal Article
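To make the idea of path-based evidence concrete, the sketch below pools hypothetical estimates from independent evidence paths for a single comparison by inverse-variance weighting and prints each path's contribution. It illustrates the general principle only, not the paper's decomposition algorithm.

```python
# Illustrative sketch: pooling independent path-based estimates of one treatment
# comparison by inverse-variance weighting and showing each path's contribution.
# The numbers are hypothetical, not taken from the paper.
import numpy as np

# Effect estimates (e.g., log odds ratios) and variances from three
# independent evidence paths for the same comparison A vs. B.
path_estimates = np.array([0.30, 0.55, 0.10])
path_variances = np.array([0.04, 0.09, 0.16])

weights = 1.0 / path_variances
pooled = np.sum(weights * path_estimates) / np.sum(weights)
pooled_var = 1.0 / np.sum(weights)
contrib = weights / np.sum(weights)

for i, (est, c) in enumerate(zip(path_estimates, contrib), start=1):
    print(f"Path {i}: estimate = {est:+.2f}, contribution = {c:.0%}")
print(f"Network estimate ≈ {pooled:.2f} (SE {np.sqrt(pooled_var):.2f})")
```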
Profiling the transcriptomic signatures and identifying the patterns of zygotic genome activation – a comparative analysis between early porcine embryos and their counterparts in other three mammalian species
2022
Background
The transcriptional changes around zygotic genome activation (ZGA) in preimplantation embryos are critical for studying mechanisms of embryonic developmental arrest and searching for key transcription factors. However, studies on the transcription profile of porcine ZGA are limited.
Results
In this study, we performed RNA sequencing on porcine in vivo developed (IVV) and somatic cell nuclear transfer (SCNT) embryos at different stages and compared the transcriptional activity of porcine embryos with that of mouse, bovine and human embryos. The results showed that the transcriptome map of early porcine embryos changed significantly at the 4-cell stage, and 5821 differentially expressed genes (DEGs) in SCNT embryos failed to be reprogrammed or activated during ZGA; these were mainly enriched in metabolic pathways. c-MYC was identified as the most highly expressed transcription factor during ZGA. Upon treatment with 10058-F4, an inhibitor of c-MYC, the cleavage rate (38.33 ± 3.4%) and blastocyst rate (23.33 ± 4.3%) of porcine embryos were significantly lower than those of the control group (50.82 ± 2.7% and 34.43 ± 1.9%). Cross-species analysis of the transcriptome during ZGA showed that pigs and bovines had the highest similarity coefficient in biological processes. KEGG pathway analysis indicated that there were 10 co-shared pathways in the four species.
Conclusions
Our results reveal that embryos with impaired developmental competence may be arrested at an early stage of development. c-MYC helps promote ZGA and preimplantation embryonic development in pigs. Pigs and bovines have the highest coefficient of similarity in biological processes during ZGA. This study provides an important reference for further studying the reprogramming regulatory mechanism of porcine embryos during ZGA.
Journal Article
Qualitative analysis techniques for the review of the literature
by Onwuegbuzie, Anthony J; Leech, Nancy L; Collins, Kathleen M.T
in Comparative Analysis, Componential Analysis, Data Analysis
2012
In this article, we provide a framework for analyzing and interpreting sources that inform a literature review or, as it is more aptly called, a research synthesis. Specifically, using Leech and Onwuegbuzie's (2007, 2008) frameworks, we delineate how the following four major source types inform research syntheses: talk, observations, drawings/photographs/videos, and documents. We identify 17 qualitative data analysis techniques that are optimal for analyzing one or more of these source types. Further, we outline the role that the following five qualitative data analysis techniques can play in the research synthesis: constant comparison analysis, domain analysis, taxonomic analysis, componential analysis, and theme analysis. We contend that our framework represents a first step in an attempt to help literature reviewers analyze and interpret literature in an optimally rigorous way. Keywords: Review of the Literature, Research Synthesis, Qualitative Analysis, Constant Comparison Analysis, Domain Analysis, Taxonomic Analysis, Componential Analysis, Theme Analysis
Journal Article
Mineral Composition of Cereal and Cereal-Free Dry Dog Foods versus Nutritional Guidelines
by Witkowicz, Robert; Biel, Wioletta; Kazimierska, Katarzyna
in Animal Feed - analysis, Animal Nutritional Physiological Phenomena, Animals
2020
The aims of the present work are to estimate the nutritional value of extruded complete foods for adult dogs; to evaluate and compare their levels of macroelements (Ca, P, K, Na, Mg), microelements (Fe, Zn, Mn, Cu), and heavy metals (Co, Cd, Pb, Mo, Cr, Ni), and the ratios between them; to check their compatibility with nutritional guidelines; and to assess food profile similarity. Basic composition was determined according to Association of Official Analytical Chemists (AOAC) methods. Analyses for elements were performed using an atomic absorption spectrometer. All the evaluated dry dog foods met the minimum recommended levels for protein and fat. Eighteen of the tested dog foods (60%) failed to meet at least one recommendation of the nutritional guidelines. Four dog foods exceeded the legal limit for Fe and five exceeded the legal limit for Zn; in one of them, the Zn level was almost twice the limit. Dog foods with insect protein exceeded the legal limit for Mn content. Eight dog foods had an inappropriate Ca:P ratio. Heavy metals were below the detection limit in all analyzed dog foods. The results point to the need for regular analyses of the elemental composition of raw materials before introducing supplementation and for monitoring of the mineral composition of finished pet food.
Journal Article
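The compliance checks described above (guideline minima, legal limits, Ca:P ratio) amount to simple threshold comparisons. A toy sketch of such a check follows; all guideline values and food data are placeholders, not the figures used in the study.

```python
# Minimal sketch of a guideline-compliance check: comparing a food's measured Ca
# and P (g/kg dry matter) against placeholder minima and an acceptable Ca:P window.
# All numbers are illustrative, not the study's guideline values or measurements.
def check_food(name, ca, p, ca_min=5.0, p_min=4.0, ratio_range=(1.0, 2.0)):
    ratio = ca / p
    issues = []
    if ca < ca_min:
        issues.append(f"Ca below minimum ({ca} < {ca_min})")
    if p < p_min:
        issues.append(f"P below minimum ({p} < {p_min})")
    if not (ratio_range[0] <= ratio <= ratio_range[1]):
        issues.append(f"Ca:P ratio {ratio:.2f} outside {ratio_range}")
    print(f"{name}: " + ("; ".join(issues) if issues else "meets placeholder guidelines"))

check_food("Food A", ca=7.2, p=5.8)
check_food("Food B", ca=4.1, p=5.0)   # low Ca and inappropriate Ca:P ratio
```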
Systematic analysis of nucleation-dependent polymerization reveals new insights into the mechanism of amyloid self-assembly
2008
Self-assembly of misfolded proteins into ordered fibrillar aggregates known as amyloid results in numerous human diseases. Despite an increasing number of proteins and peptide fragments being recognised as amyloidogenic, how these amyloid aggregates assemble remains unclear. In particular, the identity of the nucleating species, an ephemeral entity that defines the rate of fibril formation, remains a key outstanding question. Here, we propose a new strategy for analyzing the self-assembly of amyloid fibrils involving global analysis of a large number of reaction progress curves and the subsequent systematic testing and ranking of a large number of possible assembly mechanisms. Using this approach, we have characterized the mechanism of the nucleation-dependent formation of β₂-microglobulin (β₂m) amyloid fibrils. We show, by defining nucleation in the context of both structural and thermodynamic aspects, that a model involving a structural nucleus approximately the size of a hexamer is consistent with the relatively small concentration dependence of the rate of fibril formation, contrary to expectations based on simpler theories of nucleated assembly. We also demonstrate that fibril fragmentation is the dominant secondary process that produces higher apparent cooperativity in fibril formation than predicted by nucleated assembly theories alone. The model developed is able to explain and predict the behavior of β₂m fibril formation and provides a rationale for generic properties observed in other amyloid systems, such as fibril growth acceleration and pathway shifts under agitation.
Journal Article
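The abstract above describes global fitting of many reaction progress curves with shared parameters. The sketch below illustrates only that global-fit idea: a generic sigmoidal growth model with a power-law lag-time dependence on concentration stands in for the mechanistic models actually tested and ranked in the paper.

```python
# Much-simplified global-fit sketch: several fibril-growth progress curves at
# different protein concentrations are fitted simultaneously with shared
# parameters. The sigmoidal model is a generic stand-in, not the paper's models.
import numpy as np
from scipy.optimize import curve_fit

def progress(tc, t0, gamma):
    """Normalised fibril mass vs. time; the lag time scales as t0 / conc**gamma."""
    t, conc = tc
    t_lag = t0 / conc**gamma
    return 1.0 / (1.0 + np.exp(-(t - t_lag)))

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 60)
concs = [20.0, 40.0, 80.0]                      # hypothetical protein concentrations (uM)
t_all = np.concatenate([t] * len(concs))
c_all = np.concatenate([np.full_like(t, c) for c in concs])
y_all = progress((t_all, c_all), 180.0, 0.5) + rng.normal(0, 0.02, t_all.size)

popt, _ = curve_fit(progress, (t_all, c_all), y_all, p0=[100.0, 0.3])
# A small fitted exponent corresponds to a weak concentration dependence of the
# fibril-formation kinetics, the kind of observation discussed in the abstract.
print(f"shared fit: t0 = {popt[0]:.0f}, concentration exponent = {popt[1]:.2f}")
```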
Visual analysis of geospatial habitat suitability model based on inverse distance weighting with paired comparison analysis
by Varatharajan, R; Gunasekaran Manogaran; Barna, Cornel
in Climate models, Decision analysis, Decision making
2018
A geospatial data analytical model is developed in this paper to model the spatial suitability of malaria outbreaks in Vellore, Tamil Nadu, India. In general, disease control strategies require not only spatial information such as landscape, weather and climate, but also spatially explicit information such as socioeconomic variables, population density, and the behavior and natural habits of the people. The spatial multi-criteria decision analysis approach combines multi-criteria decision analysis and a geographic information system (GIS) to model spatially explicit and implicit information and to make practical decisions under different scenarios and environments. Malaria is one of the emerging diseases worldwide, and its occurrence in the study area is driven by weather and climate conditions. Climate conditions are often called spatially implicit information; traditional decision-making models do not use spatially implicit information and most often rely on spatially explicit information such as socio-economic factors and the natural habits of the people. There is therefore a need for an integrated approach that combines spatially implicit and explicit information. The proposed approach is used to identify an effective control strategy for the prevention and control of malaria. Inverse Distance Weighting (IDW), a deterministic interpolation method, is used in this paper to assign weight values based on neighboring locations. ArcGIS software is used to develop the geospatial habitat suitability model.
Journal Article
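Inverse Distance Weighting, the interpolation method named above, predicts a value at an unsampled location as a distance-weighted average of nearby observations. A minimal standalone sketch follows; the coordinates, scores and power parameter are illustrative, not the Vellore dataset or the ArcGIS implementation used in the study.

```python
# Minimal sketch of Inverse Distance Weighting (IDW): the value at an unsampled
# location is the inverse-distance-weighted average of observed values.
# Coordinates and scores are hypothetical placeholders.
import numpy as np

def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_obs - xy_query, axis=1)   # distances to sampled sites
    if np.any(d < eps):                             # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**power                              # inverse-distance weights
    return np.sum(w * values) / np.sum(w)

# Observed suitability (or incidence) scores at sampled locations.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
scores = np.array([0.2, 0.8, 0.4, 0.9])

print(f"Interpolated score at (0.6, 0.5): {idw(sites, scores, np.array([0.6, 0.5])):.2f}")
```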
Multi-Sensor Data Fusion Solutions for Blind and Visually Impaired: Research and Commercial Navigation Applications for Indoor and Outdoor Spaces
by Tsiligkos, Kleomenis; Meliones, Apostolos; Theodorou, Paraskevi
in Adaptive technology, assistive technologies, Blindness
2023
Several assistive technology solutions targeting the group of Blind and Visually Impaired (BVI) people have been proposed in the literature utilizing multi-sensor data fusion techniques. Furthermore, several commercial systems are currently being used in real-life scenarios by BVI individuals. However, given the rate at which new publications appear, the available review studies quickly become outdated. Moreover, there is no comparative study of the multi-sensor data fusion techniques found in the research literature versus those used in the commercial applications that many BVI individuals trust to complete their everyday activities. The objective of this study is to classify the available multi-sensor data fusion solutions found in the research literature and in commercial applications; to conduct a comparative study between the most popular commercial applications (Blindsquare, Lazarillo, Ariadne GPS, Nav by ViaOpta, Seeing Assistant Move) regarding the supported features; and to compare the two most popular ones (Blindsquare and Lazarillo) with the BlindRouteVision application, developed by the authors, from the standpoint of Usability and User Experience (UX) through field testing. The literature review of sensor-fusion solutions highlights the trend of utilizing computer vision and deep learning techniques; the comparison of the commercial applications reveals their features, strengths, and weaknesses; and the Usability and UX evaluation demonstrates that BVI individuals are willing to sacrifice a wealth of features for more reliable navigation.
Journal Article