393 result(s) for "explanatory analyses"
Soft-computing models for predicting plastic viscosity and interface yield stress of fresh concrete
Interface yield stress and plastic viscosity of fresh concrete significantly influence its pumpability. Accurately determining these properties requires extensive on-site testing, which wastes time and resources. To speed up the process, this study uses four machine learning (ML) algorithms, Random Forest Regression (RFR), Gene Expression Programming (GEP), K-nearest Neighbor (KNN), and Extreme Gradient Boosting (XGB), together with a statistical technique, Multiple Linear Regression (MLR), to develop predictive models for the plastic viscosity and interface yield stress of concrete. Of the employed algorithms, only GEP expresses its output in the form of an empirical equation. The models were developed using data from the published literature with six input parameters (including cement, water, and time after mixing) and two output parameters, i.e., plastic viscosity and interface yield stress. The performance of the developed algorithms was assessed using several error metrics, k-fold validation, and residual assessment. The comparison of results revealed that XGB is the most accurate algorithm for predicting plastic viscosity (training , testing ) and interface yield stress (training , testing ). To gain further insight into the model prediction process, Shapley and individual conditional expectation analyses were carried out on the XGB algorithm, which highlighted that water, cement, and time after mixing are the most influential parameters for estimating both fresh properties of concrete. In addition, a graphical user interface has been made to efficiently implement the findings of this study in the civil engineering industry.
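The boosted-tree workflow described above can be sketched with scikit-learn's GradientBoostingRegressor standing in for XGBoost (the dedicated xgboost package is a separate dependency). The six mix-design features and the viscosity target below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical mix-design inputs: cement, water, time after mixing, etc. (scaled to 0-1)
X = rng.uniform(0.0, 1.0, size=(300, 6))
# Synthetic stand-in for plastic viscosity; not the paper's measurements
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.05, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
mae = mean_absolute_error(y_te, pred)         # mean absolute error
rmse = mean_squared_error(y_te, pred) ** 0.5  # root mean square error
r2 = r2_score(y_te, pred)                     # coefficient of determination
```

Swapping in `xgboost.XGBRegressor` would follow the same fit/predict/score pattern.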
Enhanced Forecasting and Assessment of Urban Air Quality by an Automated Machine Learning System: The AI‐Air
An automated air quality forecasting system (AI‐Air) was developed to optimize and improve air quality forecasting for different typical cities, combined with the China Meteorological Administration Unified Atmospheric Chemistry Environmental Model (CUACE), and applied in Zhengzhou, a typical inland city, and Haikou, a coastal city, in China. The performance evaluation results show that for the PM2.5 forecasts, the correlation coefficient (R) is increased by 0.07–0.13, and the mean error (ME) and root mean square error (RMSE) are decreased by 3.2–3.5 and 3.8–4.7 μg/m³, respectively. Similarly, for the O3 forecasts, the R value is improved by 0.09–0.44, and the ME and RMSE values are reduced by 7.1–22.8 and 9.0–25.9 μg/m³, respectively. Case analyses of operational forecasting also indicate that the AI‐Air system can significantly improve the forecasting performance for pollutant concentrations and effectively correct underestimation or overestimation compared to the CUACE model. Additionally, explanatory analyses were performed to assess the key meteorological factors affecting air quality in cities with different topographic and climatic conditions. The AI‐Air system highlights the potential of AI techniques to improve forecast accuracy and efficiency, with promising applications in the field of air quality forecasting. Plain Language Summary: Artificial intelligence (AI) technology currently provides an innovative way to address air quality problems. This work develops an advanced automated air quality forecasting system (AI‐Air) based on the China Meteorological Administration Unified Atmospheric Chemistry Environmental Model (CUACE). Compared with existing numerical models, the AI‐Air system shows excellent performance in both overall evaluation and case‐specific forecasting.
The AI‐Air system not only surpasses conventional methods in forecasting accuracy but also demonstrates fine-grained forecasting ability. In addition, this study provides an in‐depth discussion of the key factors affecting air quality in different types of cities and conducts a feature importance analysis. This analysis deepens the understanding of the intrinsic mechanisms of air quality changes in different urban environments and provides a scientific basis for formulating more precise air quality management strategies. Overall, the development and application of the AI‐Air system not only improves the science and accuracy of air quality prediction, but also provides strong technical support for urban environmental management and policy formulation. Key Points: (1) an automated ML system (AI‐Air) is developed for urban air quality forecasting; (2) operational analyses show effective correction of under‐/overestimation; (3) explanatory analyses explore key influencing factors in inland and coastal cities.
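The verification metrics quoted above (R, ME, RMSE) can be computed directly from paired forecasts and observations. The values below are hypothetical, and ME is taken here as mean absolute error, which may differ from the paper's exact definition:

```python
import numpy as np

obs = np.array([35.0, 42.0, 58.0, 61.0, 49.0, 73.0])  # hypothetical PM2.5 observations (ug/m3)
fc = np.array([31.0, 45.0, 52.0, 65.0, 47.0, 70.0])   # hypothetical model forecasts

r = np.corrcoef(obs, fc)[0, 1]            # correlation coefficient (R)
me = np.mean(np.abs(fc - obs))            # mean error (ME), here mean absolute error
rmse = np.sqrt(np.mean((fc - obs) ** 2))  # root mean square error (RMSE)
```

RMSE is never smaller than the mean absolute error, so improvements reported in both metrics move together but at different scales.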
Using Sieve of Eratosthenes for the Factor Analysis of Neutrosophic Form of the Five Facet Mindfulness Questionnaire as an Alternative Confirmatory Factor Analysis
In this study, the Five Facet Mindfulness Questionnaire, adapted from the short form of the Five Facet Mindfulness Questionnaire, was evaluated, converted into neutrosophic form, and the results of the two scales were compared, both to propose a new type of confirmatory analysis procedure and to develop neutrosophic scales. Exploratory factor analysis was used in the analysis of the data. In addition, the test results were analyzed for Kaiser–Meyer–Olkin and Bartlett values, common factor variance values, scree plots, and the principal component analysis results. The sample of the study consists of 194 students in mathematics departments at Bitlis Eren University and Iğdır University in Turkey, selected by the convenience sampling method. Convenience sampling is a non-probability sampling procedure in which the sample is obtained from a group of individuals who are easily accessible or reachable. It was chosen because the study aims to examine the structure of the measurement tool rather than the psychological characteristics of a particular population. First, it was examined whether any classical scale can be converted into a neutrosophic one. It was observed that the sub-dimensions of a neutrosophic scale (agree, disagree, and undecided) might not have a factor structure similar to the classical one. Interestingly, in the factor analysis of the neutrosophic scale, both the classical scale and the agreement part of the neutrosophic scale have the same factors, implying that the one-dimensional classical scale measures the agreement degree of the participants. When the factor analysis was conducted on the disagreement and vagueness dimensions, some factors were eliminated and even some new factors emerged, indicating that in human cognition these three dimensions can be taken as independent of each other, just as assumed by neutrosophic logic.
Another important implication of the factor analysis is that the neutrosophic form of any questionnaire can be used to validate the classical one. Item loadings, and their accumulation into factors, are compared between the classical scale and the three-dimensional neutrosophic scale, so that items corresponding to the same factors are retained while items or factors that do not correspond to each other are eliminated. This is very similar to the Sieve of Eratosthenes, an ancient algorithm for finding prime numbers up to any given limit, in which each prime is taken as an independent base or dimension and the multiples of the selected primes in a given interval are eliminated until only prime numbers are left. Finally, the reliability of the three independent dimensions of the neutrosophic form of any questionnaire can also be used to check whether the measurement tool is reliable. Low reliability in any dimension may imply that the scale has problems in terms of meaning, language, or other factors.
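The Sieve of Eratosthenes invoked above can be written in a few lines; this is the standard textbook algorithm, not code from the paper:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to `limit` by crossing out multiples of each prime."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Eliminate every multiple of p; whatever survives all sieving is prime.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]
```

The analogy in the abstract treats each questionnaire dimension like a prime base: non-corresponding items are "crossed out" until only the validated ones remain.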
Quantification of collective behaviour via causality analysis
Terms such as leader, mediator, and follower apply equally well to a pack of wolves, a street protest crowd, or a business team, and carry very similar meanings in each. This indicates the presence of some general law or structure that governs collective behaviour. To reveal it, we selected the parameter common to most levels of organisation: motion. A causality analysis of distance correlations was performed to obtain follow-up networks that show who follows whom and how strongly. These networks characterise an observed system in general and act as an automation bridge between the biological experiment and the broad field of network analysis. The proposed method was tested on 3D image data from a controlled experiment on a six-member school of Tiger Barb aquarium fish. The network patterns can be easily interpreted ethologically and agree with the expected behaviour.
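The follow-up-network idea can be illustrated with a simplified stand-in for the paper's distance-correlation causality analysis: a time-lagged Pearson correlation, where a high peak at a positive lag suggests one trajectory trails another. The motion series below are synthetic:

```python
import numpy as np

def follow_strength(leader, follower, max_lag=5):
    """Peak correlation of `follower` with `leader` shifted earlier in time.

    A high value at some positive lag suggests `follower` tracks `leader`'s
    motion with a delay. This is a simplified stand-in for the paper's
    distance-correlation causality analysis, not its actual method.
    """
    best = 0.0
    for lag in range(1, max_lag + 1):
        r = np.corrcoef(leader[:-lag], follower[lag:])[0, 1]
        best = max(best, r)
    return best

rng = np.random.default_rng(1)
leader = rng.normal(size=200)                             # hypothetical speed series of one fish
follower = np.roll(leader, 3) + rng.normal(0, 0.1, 200)   # trails the leader by 3 frames
```

Computing this strength for every ordered pair of animals yields a weighted directed graph (the "follow-up network"), which can then be handed to standard network-analysis tools.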
Predicting Agitation-Sedation Levels in Intensive Care Unit Patients: Development of an Ensemble Model
Agitation and sedation management is critical in intensive care, as it affects patient safety. Traditional nursing assessments suffer from low frequency and subjectivity. Automating these assessments can boost intensive care unit (ICU) efficiency, treatment capacity, and patient safety. The aim of this study was to develop a machine-learning-based assessment of agitation and sedation. Using data from the Taichung Veterans General Hospital ICU database (2020), an ensemble learning model was developed for classifying the levels of agitation and sedation. Different ensemble learning model sequences were compared. In addition, an interpretable artificial intelligence approach, SHAP (Shapley additive explanations), was employed for explanatory analysis. With 20 features and 121,303 data points, the random forest model achieved high area under the curve (AUC) values across all models (sedation classification: 0.97; agitation classification: 0.88). The ensemble learning model enhanced agitation sensitivity (0.82) while maintaining high AUC values across all categories (all >0.82). The model explanations aligned with clinical experience. This study proposes automating ICU agitation-sedation assessment using machine learning, enhancing efficiency and safety. Ensemble learning improves agitation sensitivity while maintaining accuracy. Real-time monitoring and future digital integration hold potential for advances in intensive care.
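A dependency-free sketch of the explanatory step: SHAP itself requires the third-party shap package, so permutation importance, a different but related feature-attribution technique, is used here as a stand-in, on synthetic data rather than the ICU database:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical patient features; only the first two actually drive the label.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the accuracy drop.
result = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

With the shap package installed, `shap.TreeExplainer` would give per-prediction attributions rather than this global ranking, which is what lets clinicians check individual model decisions against experience.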
A simplified measure of burnout symptoms among paramedics - an exploratory analysis of a Hungarian sample
Background: Burnout is still one of the leading mental health problems. According to research results over the past decades, healthcare workers, including paramedics, are considered a high-risk group. In concordance with these results, available resources need to prioritize monitoring paramedics' mental health. Methods: In our study, we investigated whether the available test batteries measuring burnout could be reduced while maintaining their effectiveness. We reduced the 21-item Burnout Measure and the 8-item version of the Psychosomatic Symptom Scale using data from 727 Hungarian paramedics. We selected the top four items of the questionnaires that were significantly correlated with the original Burnout Measure Index and the Psychosomatic Scale Index. The classification efficiency of the shortened list of items was based on the initial risk categories of the Burnout Measure, and its sensitivity was analyzed using binary logistic regression and ROC curves. We then used two-step cluster analysis to test the ability of the shortened Burnout Measure Index to develop new risk categories. The reliability indicators were also explored. Results: The results show that the Burnout Measure can be reduced to 4 items with a classification efficiency of 93.5% in determining the level of burnout. The 5-item reduction of the Psychosomatic Symptom Scale can classify subjects to the appropriate intervention level for burnout with an efficiency of 81.6%. The ROC analysis suggests that the shortened questionnaires have an excellent ability to discriminate between the initial risk groups. Three new risk categories were also identified as a result of the cluster analysis. Conclusion: The shortened scales may prove effective for resource management, which could significantly speed up the assessment of burnout in the future. The abbreviated scale is also suitable for classifying subjects into risk categories. However, further research is needed to see whether the shortened scales can be used as a diagnostic tool.
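The ROC-based sensitivity check described in the Methods can be sketched with scikit-learn; the 4-item scale scores and risk labels below are simulated, not the Hungarian sample:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical shortened scale: 4 items scored 1-7; risk label from a noisy latent score.
items = rng.integers(1, 8, size=(400, 4)).astype(float)
latent = items.sum(axis=1) + rng.normal(0.0, 2.0, 400)
at_risk = (latent > np.median(latent)).astype(int)

# Binary logistic regression on the shortened items, scored by AUC.
clf = LogisticRegression(max_iter=1000).fit(items, at_risk)
auc = roc_auc_score(at_risk, clf.predict_proba(items)[:, 1])
```

An AUC near 1.0 corresponds to the "excellent separative ability" the abstract reports; classification efficiency at a chosen cutoff would be read off the same fitted model.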
Re-Evaluating Components of Classical Educational Theories in AI-Enhanced Learning: An Empirical Study on Student Engagement
The primary goal of this research was to empirically identify and validate the factors influencing student engagement in a learning environment where AI-based chat tools, such as ChatGPT or other large language models (LLMs), are intensively integrated into the curriculum and teaching–learning process. Traditional educational theories provide a robust framework for understanding diverse dimensions of student engagement, but the integration of AI-based tools offers new personalized learning experiences, immediate feedback, and resource accessibility that necessitate a contemporary exploration of these foundational concepts. Exploratory Factor Analysis (EFA) was utilized to uncover the underlying factor structure within a large set of variables, and Confirmatory Factor Analysis (CFA) was employed to verify the factor structure identified by EFA. Four new factors have been identified: “Academic Self-Efficacy and Preparedness”, “Autonomy and Resource Utilization”, “Interest and Engagement”, and “Self-Regulation and Goal Setting.” Based on these factors, a new engagement measuring scale has been developed to comprehensively assess student engagement in AI-enhanced learning environments.
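An EFA step like the one described can be sketched with scikit-learn's FactorAnalysis plus varimax rotation (the paper's actual software and item set are not stated); the six survey items below are simulated to load on two latent factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical survey: two latent engagement factors, three items loading on each.
factors = rng.normal(size=(300, 2))
loadings = np.array([[1.0, 0.0], [0.9, 0.0], [0.8, 0.0],
                     [0.0, 1.0], [0.0, 0.9], [0.0, 0.8]])
X = factors @ loadings.T + rng.normal(0.0, 0.3, size=(300, 6))

# EFA with varimax rotation to recover a simple structure.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)
est = fa.components_  # estimated loadings, shape (2, 6)
```

CFA, the confirmatory step, is not in scikit-learn; packages such as semopy or lavaan (R) are typical choices for verifying the structure EFA uncovers.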
Identifying Factors Influencing the Adoption of CIFRS/CIFRS for SMEs in Cambodia
The data collected from the survey in this study revealed that a total of 2431 firms successfully filled out and returned the questionnaire. Among these, 73.79% were categorized as non-adopters of CIFRS, whereas 26.21% were recognized as adopters of CIFRS. The findings derived from the logistic regression model indicated that the latent variables, including financial reporting components, stakeholder knowledge and attitudes, the internal control system, and the costs related to the implementation of CIFRS, exerted a highly positive and statistically significant impact on the probability of adopting CIFRS in Cambodia at the 1% significance level; the variable concerning financial reporting components, however, was significant only at the 5% level. It is important to mention that the estimated latent variables were determined using explanatory factor analysis (EFA). The estimated coefficients for the control variables were 0.997 for number of employees, 1.243 for assets, 5.581 for filing financial reports with ACAR, -0.725 for ACAR's enforcement of laws related to CIFRS implementation, and -0.644 for inconsistencies in the financial information provided by CIFRS compared to the standards set by the General Department of Taxation. Each parameter associated with the control variables was statistically significant at the 1% level.
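A binary-adoption logistic regression of this general shape can be sketched as follows; the firm-level predictors, coefficients, and data are invented for illustration and do not reproduce the paper's model or its reported estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical firm-level predictors: [log employees, log assets, files with ACAR (0/1)]
X = np.column_stack([rng.normal(4, 1, 600),
                     rng.normal(12, 2, 600),
                     rng.integers(0, 2, 600).astype(float)])
# Simulated adoption outcome from an assumed logit relationship.
logit = -8 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + 2.0 * X[:, 2]
adopts = (rng.random(600) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, adopts)
# Exponentiated coefficients read as odds ratios: the multiplicative change
# in adoption odds per unit increase in each predictor.
odds_ratios = np.exp(model.coef_[0])
```

A positive coefficient (odds ratio above 1) corresponds to the "positive and statistically significant impact" language in the abstract; significance levels themselves would come from a statistics package such as statsmodels rather than scikit-learn.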
Impact of COVID-19 on Urban Mobility and Parking Demand Distribution: A Global Review with Case Study in Melbourne, Australia
The tremendous impact of the novel coronavirus (COVID-19) on societal, political, and economic rhythms has given rise to a significant overall shift from pre- to post-pandemic policies. Restrictions, stay-at-home regulations, and lockdowns have directly influenced day-to-day urban transportation flow. The rise of door-to-door services and the demand for visiting medical facilities, grocery stores, and restaurants have had a significant impact on urban transportation modal demand, further impacting zonal parking demand distribution. This study reviews the overall impacts of the pandemic on urban transportation with respect to a variety of policy changes in different cities. The parking demand shift was investigated by exploring the during- and post-COVID-19 parking policies of distinct metropolises. Detailed data on Melbourne city parking, generated by Internet of Things (IoT) sensors and devices, are examined. Empirical data from 2019 (16 March to 26 May) and 2020 (16 March to 26 May) are explored in depth using explanatory data analysis to demonstrate the demand and average parking duration shifts from district to district. The results show that the experimental zones of the Docklands, Queensbery, Southbanks, Titles, and Princess Theatre areas experienced decreases in percentage change of vehicle presence of 29.2%, 36.3%, 37.7%, 23.7%, and 40.9%, respectively. Furthermore, on-street level analysis of the Princess Theatre zone's Lonsdale Street, Exhibition Street, Spring Street, and Little Bourke Street parking bays indicated decreases in percentage change of vehicle presence of 38.7%, 56.4%, 12.6%, and 35.1%, respectively. In conclusion, future potential policymaking frameworks are discussed that could provide further guidance in stipulating epidemic prevention and control policies, particularly in relation to parking regulations during the pandemic.
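The zone-level figures above follow from a simple percentage-change formula over matched time windows; the counts below are hypothetical, not the Melbourne sensor data:

```python
# Percentage change in vehicle presence between matched 2019 and 2020 windows.
presence_2019 = 10_450  # hypothetical sensor-detected parking events, 16 Mar - 26 May 2019
presence_2020 = 7_399   # same zone, same window, in 2020

pct_change = (presence_2020 - presence_2019) / presence_2019 * 100
print(round(pct_change, 1))  # -29.2, i.e. a 29.2% decrease
```

Applying the same formula per zone (and per street) over the IoT event counts yields the district-to-district comparison the study reports.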
Analysis of the Historical Compatibility of AI-Assisted Urban Furniture Design Using the Semantic Differentiation Method: The Case of Elazığ Harput
This study examined the historical compatibility of urban furniture in Harput Sarahatun Mosque Square, Elazığ, Türkiye. It evaluated AI-generated modern and classical-style alternatives using the Semantic Differentiation Method. The research aimed to compare existing furniture with AI-assisted designs and identify the key attributes influencing historical and spatial integration. The methodology consists of four stages: (1) defining adjective pairs to assess historical compatibility through expert opinions and a literature review; (2) photographing existing urban furniture and generating AI-assisted modern and classical-style urban furniture (benches, trash cans, and lighting elements); (3) gathering expert opinions via a survey; (4) statistical analysis of the results through descriptive statistics and explanatory factor analysis (EFA). The study, conducted online in February 2025, involved 31 experts from the architecture and landscape architecture disciplines. The findings show that the existing furniture is judged mainly on practicality and usability, with limited attention to historical integration. Modern AI-generated designs emphasize innovation, minimalism, and contemporary aesthetics. In contrast, classical-style AI-generated furniture is appreciated for its historical compatibility, cultural resonance, and aesthetic harmony. Experts favored the classical alternatives for their alignment with traditional urban character. The results highlight the need for future designs to balance functionality, sustainability, and historical continuity, ensuring that urban furniture contributes to cultural preservation as well as modern urban needs.