Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
1,997 result(s) for "ANOVA variance"
Statistical Analysis of the Spatial Distribution of Multi-Elements in an Island Arc Region: Complicating Factors and Transfer by Water Currents
2017
The compositions and transfer processes affecting coastal sea sediments from the Seto Inland Sea and the Pacific Ocean are examined through the construction of comprehensive terrestrial and marine geochemical maps for western Japan. Two-way analysis of variance (ANOVA) suggests that the elemental concentrations of marine sediments vary with particle size, and that this has a greater effect than the regional provenance of the terrestrial material. Cluster analysis is employed to reveal similarities and differences in the geochemistry of coastal sea and stream sediments. This analysis suggests that the geochemical features of fine sands and silts in the marine environment reflect those of stream sediments in the adjacent terrestrial areas. However, gravels and coarse sands do not show this direct relationship, which is likely a result of mineral segregation by strong tidal currents and the denudation of old basement rocks. Finally, the transport processes for the fine-grained sediments are discussed, using the spatial distribution patterns of outliers for those elements enriched in silt and clay. Silty and clayey sediments are found to be transported and dispersed widely by a periodic current in the inner sea, and are selectively deposited at the boundary of different water masses in the outer sea.
Journal Article
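The two-way ANOVA described in the abstract above can be sketched in a few lines. The sediment concentrations, factor names, and level labels below are invented for illustration; they are not data from the study.

```python
from itertools import product
from statistics import mean

def two_way_anova(data):
    """Balanced two-way ANOVA with replication.

    data[(i, j)] holds the replicate measurements for level i of factor A
    and level j of factor B; every cell must have the same replicate count.
    """
    levels_a = sorted({i for i, _ in data})
    levels_b = sorted({j for _, j in data})
    a, b = len(levels_a), len(levels_b)
    n = len(next(iter(data.values())))  # replicates per cell
    all_y = [y for cell in data.values() for y in cell]
    grand = mean(all_y)

    mean_a = {i: mean([y for j in levels_b for y in data[(i, j)]]) for i in levels_a}
    mean_b = {j: mean([y for i in levels_a for y in data[(i, j)]]) for j in levels_b}
    mean_ab = {(i, j): mean(data[(i, j)]) for i, j in product(levels_a, levels_b)}

    # Sum-of-squares decomposition: SS_total = SS_A + SS_B + SS_AB + SS_err.
    ss_a = b * n * sum((mean_a[i] - grand) ** 2 for i in levels_a)
    ss_b = a * n * sum((mean_b[j] - grand) ** 2 for j in levels_b)
    ss_ab = n * sum((mean_ab[(i, j)] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i, j in product(levels_a, levels_b))
    ss_err = sum((y - mean_ab[(i, j)]) ** 2
                 for (i, j), cell in data.items() for y in cell)
    ss_tot = sum((y - grand) ** 2 for y in all_y)

    ms_err = ss_err / (a * b * (n - 1))
    return {"SS_A": ss_a, "SS_B": ss_b, "SS_AB": ss_ab,
            "SS_err": ss_err, "SS_total": ss_tot,
            "F_A": (ss_a / (a - 1)) / ms_err,
            "F_B": (ss_b / (b - 1)) / ms_err}

# Hypothetical element concentrations (ppm): particle size x sea region.
data = {
    ("silt", "inland"): [12.1, 11.8, 12.4],
    ("silt", "pacific"): [11.9, 12.2, 12.0],
    ("sand", "inland"): [8.3, 8.7, 8.1],
    ("sand", "pacific"): [8.6, 8.2, 8.5],
}
res = two_way_anova(data)
```

With these made-up numbers the particle-size factor dominates the region factor, mirroring the qualitative conclusion reported in the abstract.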
Automatic Epileptic Seizure Detection in EEG Signals Using Multi-Domain Feature Extraction and Nonlinear Analysis
by
Wang, Lina
,
Xue, Weining
,
Luo, Meilin
in
analysis of variance (ANOVA)
,
epileptic seizure detection
,
nonlinear analysis
2017
Epileptic seizure detection is commonly performed by expert clinicians through visual inspection of electroencephalography (EEG) signals, which tends to be time consuming and prone to bias. The detection methods in most previous research suffer from low power and are unsuited to processing large datasets. A computerized epileptic seizure detection method is therefore highly desirable to overcome these problems, expedite epilepsy research and aid medical professionals. In this work, we propose an automatic epilepsy diagnosis framework based on the combination of multi-domain feature extraction and nonlinear analysis of EEG signals. Firstly, EEG signals are pre-processed using the wavelet threshold method to remove artifacts. We then extract representative features in the time domain, frequency domain and time-frequency domain, together with nonlinear features based on information theory. These features are further extracted in five frequency sub-bands of clinical interest, and the dimension of the original feature space is then reduced using both principal component analysis and analysis of variance. Furthermore, the optimal combination of the extracted features is identified and evaluated via different classifiers for epileptic seizure detection in EEG signals. Finally, the performance of the proposed method is investigated using a public EEG database at the University Hospital Bonn, Germany. Experimental results demonstrate that the proposed method achieves a high average accuracy of 99.25%, indicating a powerful method for the detection and classification of epileptic seizures. The proposed seizure detection scheme is thus expected to relieve expert clinicians of the burden of visually inspecting large volumes of data and to speed up epilepsy diagnosis.
Journal Article
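The ANOVA step this abstract uses for dimension reduction amounts to ranking features by a one-way F-statistic across classes. A hand-rolled sketch follows; the two feature columns ("line length" and a noise feature) and their values are hypothetical, not taken from the Bonn EEG database.

```python
from statistics import mean

def f_score(groups):
    """One-way ANOVA F-statistic for a single feature.

    groups: one list of feature values per class.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([v for g in groups for v in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical feature matrix per class: feature 0 separates seizure from
# background activity, feature 1 is pure noise.
seizure =    [[4.1, 0.5], [3.9, 0.4], [4.3, 0.6], [4.0, 0.5]]
background = [[1.2, 0.5], [1.0, 0.6], [1.3, 0.4], [1.1, 0.5]]

scores = []
for feat in range(2):
    groups = [[row[feat] for row in seizure],
              [row[feat] for row in background]]
    scores.append(f_score(groups))
```

Features with high F-scores are kept; the noise feature scores near zero and would be dropped.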
The Versatility of the Taguchi Method: Optimizing Experiments Across Diverse Disciplines
The Taguchi method, a robust experimental design technique, establishes a strong connection between input and output variables. Known for its capacity to yield precise results with fewer trials and minimized errors, this method has gained widespread application in various fields such as engineering, physics, chemistry, economics, finance, and more. In this paper, the authors examine the importance of the Taguchi orthogonal array method, its step-by-step optimization procedure, and its potential for future applications. Through a thorough literature review, the authors investigate how the Taguchi method has been effectively employed to identify key factors influencing response variables. The versatility of the Taguchi method becomes apparent when considering its applications across diverse disciplines. Researchers in engineering have successfully utilized this technique to optimize processes and enhance product quality. Furthermore, in scientific fields like physics and chemistry, the Taguchi method has proven invaluable for conducting experiments efficiently, resulting in more accurate and reproducible outcomes. Researchers gain critical insights into the effects of factors on the response variable by employing statistical tools such as mean analysis, variance analysis, and signal-to-noise ratio. The Taguchi method remains a valuable and broadly applicable tool for optimizing experiments and identifying influential factors across multiple disciplines. This paper’s extensive literature review emphasizes its significance in various fields and outlines the step-by-step procedure to leverage its potential for optimization.
Journal Article
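As a concrete sketch of the Taguchi procedure the paper surveys, the snippet below runs an L4(2^3) orthogonal array through a larger-the-better signal-to-noise analysis and picks the best level of each factor. The responses and factor meanings are invented for illustration.

```python
import math

# L4(2^3) orthogonal array: 4 runs cover 3 two-level factors.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# Hypothetical replicate responses per run (larger-the-better objective).
responses = [[20.1, 19.8], [22.4, 22.0], [18.9, 19.2], [24.1, 23.8]]

def sn_larger_better(ys):
    """Taguchi signal-to-noise ratio for a larger-the-better response."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

sn = [sn_larger_better(ys) for ys in responses]

# Average S/N at each level of each factor; the higher mean wins.
best = []
for factor in range(3):
    lvl = {1: [], 2: []}
    for run, setting in enumerate(L4):
        lvl[setting[factor]].append(sn[run])
    means = {k: sum(v) / len(v) for k, v in lvl.items()}
    best.append(max(means, key=means.get))
```

The orthogonality of the array is what lets four runs estimate three main effects; a full factorial would need eight.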
A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments
by
Ewing, Sarah J.
,
Dietz, Geoffrey D.
,
Ganger, Michael T.
in
Algorithms
,
Analysis of variance (ANOVA)
,
Bioinformatics
2017
Background
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed.
Results
Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) ∙ Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation.
Conclusions
The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
Journal Article
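A minimal sketch of the efficiency-weighted Cq calculation at the heart of the Common Base Method, assuming hypothetical per-well efficiencies and Cq values (the grouping into treated and control wells is illustrative, not from the paper):

```python
import math

def efficiency_weighted_cq(e, cq):
    """Efficiency-weighted Cq of the Common Base Method: log10(E) * Cq."""
    return math.log10(e) * cq

# Hypothetical wells: (amplification efficiency E, threshold cycle Cq).
target_treated = [(1.98, 24.1), (2.01, 24.5), (1.95, 24.3)]
target_control = [(2.00, 21.0), (1.97, 21.4), (2.02, 20.8)]

w_treated = [efficiency_weighted_cq(e, cq) for e, cq in target_treated]
w_control = [efficiency_weighted_cq(e, cq) for e, cq in target_control]

# Difference of group means, still in log10 units; a positive value means
# later amplification (lower expression) in the treated group.
delta_log10 = sum(w_treated) / 3 - sum(w_control) / 3
fold_change = 10 ** (-delta_log10)
```

Because everything stays in log10 units until the last step, ordinary t-tests or ANOVA can be applied directly to the efficiency-weighted values, which is the point of the method.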
Investigation on the Application of Artificial Intelligence in Prosthodontics
by
Minervini, Giuseppe
,
Chaturvedi, Saurabh
,
Alshadidi, Abdulkhaliq Ali F.
in
Algorithms
,
ANOVA variance
,
Artificial intelligence
2023
Artificial intelligence (AI) is a contemporary, information-driven innovative technology. Prosthetic dentistry, also known as prosthodontics, is the restoration and reconstruction of missing teeth utilizing implants for permanent and removable prostheses. It enhances healthy soft and hard tissues, promoting oral health. This study examined the use of artificial intelligence in prosthodontics to diagnose abnormalities and create patient-specific prostheses. Two researchers searched Google Scholar, Scopus, PubMed/MEDLINE, EBSCOhost, Science Direct, and Web of Science (MEDLINE, WOS, and KJD). Articles on AI published in English were reviewed. We also collected the following broad article aspects: research and control groups, assessment methodology, outcomes, and quality rankings. This methodological study examined AI use in prosthodontics using the latest scientific findings, and the findings were statistically evaluated using ANOVA. Screening of titles and abstracts revealed 172 AI-related dentistry studies, which were analyzed in this research; thirty-eight papers were eliminated. According to the evaluation, the use of AI in prosthodontics has increased significantly. Despite the vast number of studies documenting AI applications, the reviewed data illustrate the latest breakthroughs in AI in prosthodontics, highlighting its use in automatically produced diagnostics, predictive analytics, and classification or verification tools.
Journal Article
Electrochemical-Thermal Modelling and Optimisation of Lithium-Ion Battery Design Parameters Using Analysis of Variance
by
Marco, James
,
Hosseinzadeh, Elham
,
Jennings, Paul
in
analysis of variance (ANOVA)
,
design optimisation
,
Electrodes
2017
A 1D electrochemical-thermal model of an electrode pair of a lithium-ion battery is developed in Comsol Multiphysics. The mathematical model is validated against literature data for a 10 Ah lithium iron phosphate (LFP) pouch cell operating under 1 C to 5 C electrical load at 25 °C ambient temperature. The validated model is used to conduct statistical analysis of the most influential parameters that dictate cell performance, i.e., particle radius (rp), electrode thickness (Lpos), volume fraction of the active material (εs,pos) and C-rate, and their interaction on the two main responses, namely specific energy and specific power. To achieve an optimised window for energy and power within the defined range of design variables, the ranges of variation are determined from literature data: rp: 30–100 nm; Lpos: 20–100 μm; εs,pos: 0.3–0.7; C-rate: 1–5. By investigating the main effect and the interaction effect of the design variables on energy and power, it is observed that optimum energy is achieved when rp < 40 nm, 75 μm < Lpos < 100 μm, 0.4 < εs,pos < 0.6, and the C-rate is below 4 C. Conversely, optimum power is achieved for a thin electrode (Lpos < 30 μm) with high porosity and high C-rate (5 C).
Journal Article
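The main-effect screening that underlies this kind of design study can be sketched with a two-level factorial. The toy response surface below merely stands in for the validated 1D model; its coefficients, variable names, and numbers are invented for illustration.

```python
from itertools import product

def specific_energy(radius_nm, thickness_um):
    """Toy response surface standing in for the validated 1D cell model."""
    return (180 - 0.3 * radius_nm + 0.5 * thickness_um
            - 0.002 * radius_nm * thickness_um)

# Two-level screening design over the ranges quoted in the abstract.
levels = {"radius_nm": (30, 100), "thickness_um": (20, 100)}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
y = [specific_energy(**run) for run in runs]

def main_effect(factor):
    """Average response at the high level minus at the low level."""
    lo, hi = levels[factor]
    y_hi = [yi for run, yi in zip(runs, y) if run[factor] == hi]
    y_lo = [yi for run, yi in zip(runs, y) if run[factor] == lo]
    return sum(y_hi) / len(y_hi) - sum(y_lo) / len(y_lo)

effects = {f: main_effect(f) for f in levels}
```

With these toy coefficients, increasing particle radius hurts specific energy while a thicker electrode helps it, qualitatively matching the optimum window reported in the abstract.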
Application of the Analysis of Variance (ANOVA) in the Interpretation of Power Transformer Faults
by
Thango, Bonginkosi A.
in
Analysis of variance
,
analysis of variance (ANOVA)
,
descriptive statistics
2022
Electrical power transformers are the most expensive and strategically prominent components of the South African electrical power grid. At the same time, they are burdened by internal winding faults, predominantly on account of insulation system failure. It is essential that these faults be swiftly and precisely uncovered and that suitable measures be adopted to separate the faulty unit from the rest of the system. Frequency response analysis (FRA) is a technique for tracking a transformer's mechanical integrity. Nevertheless, classifying the category of a fault and its severity by benchmarking measured FRA responses is still laborious and, for the most part, anchored in personnel proficiency. This work takes a step toward standardizing the FRA interpretation procedure by suggesting interpretation code criteria based on an empirical survey of transformers ranging from 315 kVA to 40 MVA. The study then proposes an analysis of variance (ANOVA) based interpretation tool for diagnosing the statistical significance of FRA fingerprint and measured profiles, which cannot be judged reliably by an expert or by the naked eye. Additionally, descriptive statistics of FRA frequency sub-region data are proposed to evaluate shifts in both magnitude and measured frequency characteristics and to formulate the recommended interpretation code criteria. To corroborate the code criteria incorporating ANOVA and descriptive statistics, the study presents various case studies with unknown FRA profiles for fault diagnosis. The results constitute proof of the reliability of the proposed code criteria and of the proposed hybrid of ANOVA and descriptive statistics.
Journal Article
On machining of Ti-6Al-4V using multi-walled carbon nanotubes-based nano-fluid under minimum quantity lubrication
by
Deiab, I.
,
Hegab, H.
,
Gadallah, M. H.
in
Additives
,
Computer-Aided Engineering (CAD, CAE) and Design
2018
Titanium alloys are primary candidates in several applications due to their promising characteristics, such as high strength-to-weight ratio, high yield strength, and high wear resistance. Despite this superior performance, some inherent properties, such as low thermal conductivity and high chemical reactivity, lead to poor machinability and result in premature tool failure. In order to overcome the heat dissipation challenge during machining of titanium alloys, nano-cutting fluids are utilized, as they offer higher observed thermal conductivity values than the base oil. The objective of this work is to investigate the effects of a multi-walled carbon nanotubes (MWCNTs) cutting fluid during cutting of Ti-6Al-4V. The investigations study the induced surface quality under different cutting design variables, including cutting speed, feed rate, and added nano-additive percentage (wt%). The novelty here lies in enhancing the MQL heat capacity using a nanotubes-based fluid in order to improve Ti-6Al-4V machinability. Analysis of variance (ANOVA) has been implemented to study the effects of the studied design variables on machining performance. It was found that a 4 wt% MWCNTs nano-fluid decreases the surface roughness by 38% compared to tests performed without nano-additives, while a 2 wt% MWCNTs nano-fluid improves the surface quality by 50%.
Journal Article
A novel sustainable multi-objective optimization model for forward and reverse logistics system under demand uncertainty
by
Zarbakhshnia Navid
,
Kannan Devika
,
Soleimani Hamed
in
Algorithms
,
Constraint modelling
,
Environmental impact
2020
The paper aims to present a multi-product, multi-stage, multi-period, multi-objective, probabilistic mixed-integer linear programming model for a sustainable forward and reverse logistics network problem. It considers original and return products to determine both flows in the supply chain, forward and reverse, simultaneously. In addition, to locate centres of forward and reverse logistics activities and decide on a transportation strategy in a more realistic manner, demand is considered uncertain. We attempt to represent all major dimensions in the objective functions: the first objective function minimizes processing, transportation and fixed establishment costs together with the cost of CO2 emissions as environmental impacts. The processing time of reverse logistics activities forms the second objective function. Finally, the third objective function maximizes social responsibility; a complete sustainability approach is thus developed in this paper. The model's novel environmental constraint and the social matters in the objective functions constitute its innovation and contribution. Another contribution of this paper is the use of probabilistic programming to manage uncertain parameters. Moreover, a non-dominated sorting genetic algorithm (NSGA-II) is configured to obtain Pareto front solutions. The performance of the NSGA-II is compared with that of a multi-objective particle swarm optimization (MOPSO) on 10 appropriate test problems according to five comparison metrics, using analysis of variance (ANOVA) to validate the modeling approach. Overall, according to the results of the ANOVA and the comparison metrics, the performance of the NSGA-II algorithm is more satisfactory than that of the MOPSO algorithm.
Journal Article
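The non-dominated sorting step that gives NSGA-II its name can be sketched directly. The (cost, processing-time) pairs below are hypothetical candidate designs, not results from the paper.

```python
def dominates(p, q):
    """p dominates q if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def non_dominated_sort(points):
    """Split points into ranked fronts, as in the first phase of NSGA-II."""
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical (cost, processing-time) pairs for candidate network designs.
solutions = [(10, 5), (8, 7), (12, 4), (9, 9), (11, 6)]
fronts = non_dominated_sort(solutions)
```

Front 0 is the Pareto front; NSGA-II additionally breaks ties within a front by crowding distance, which is omitted in this sketch.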
Optimization of process parameters for polishing aero-engine blade with abrasive cloth wheel considering spindle vibration and polished roughness
2024
The vibration of the machine tool spindle is very important for machining. Firstly, in order to explore the relationship between spindle vibration and process parameters, the spindle vibration acceleration when polishing an aero-engine blade was measured, and quadratic polynomial models relating the spindle vibration acceleration in the X, Y and Z directions to the process parameters were established. Secondly, a quadratic polynomial model for the influence of the process parameters on polished surface roughness was also determined. Thirdly, the Analysis of Variance (ANOVA) and main effect analysis show that the spindle speed has the greatest impact on the vibration. Finally, a multi-objective optimization model was established with the objective of minimizing spindle vibration acceleration and polished surface roughness, and the optimal process parameters were solved using a genetic algorithm. The optimal process parameters were verified, and the results show that the polished surface roughness obtained with the optimal process parameters is less than 0.4 μm in all cases, and the deviations between the theoretical optimization results for spindle vibration acceleration and the experimental results are all less than 10%, indicating that the optimization results are good.
Journal Article
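Fitting a quadratic polynomial model of the kind described above is a linear least-squares problem in the polynomial coefficients. The sketch below fits one to synthetic roughness data generated from known coefficients; the variable names and numbers are assumptions for illustration, not the paper's.

```python
import numpy as np

# Synthetic samples: (spindle speed in krpm, feed in mm/min) -> Ra (um),
# generated noiselessly from a known quadratic so the fit can be checked.
rng = np.random.default_rng(0)
speed = rng.uniform(2, 8, 40)
feed = rng.uniform(50, 200, 40)
ra = 0.9 - 0.08 * speed + 0.002 * feed + 0.005 * speed ** 2

# Design matrix for a full quadratic polynomial model in two variables.
X = np.column_stack([np.ones_like(speed), speed, feed,
                     speed ** 2, feed ** 2, speed * feed])
coef, *_ = np.linalg.lstsq(X, ra, rcond=None)

def predict(s, f):
    """Evaluate the fitted quadratic surface at (speed, feed)."""
    return coef @ np.array([1.0, s, f, s ** 2, f ** 2, s * f])
```

A genetic algorithm, as used in the paper, would then search this fitted surface (together with the vibration models) for the parameter combination minimizing both objectives.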