636 result(s) for "Empirical validation"
Long-Term Projections of Cancer Incidence and Mortality in Japan and Decomposition Analysis of Changes in Cancer Burden, 2020–2054: An Empirical Validation Approach
Purpose: The aim of this study was to project new cancer cases and deaths forward to 2054 and to decompose changes in cancer cases and deaths in order to assess the impact of demographic transitions on cancer burden. Methods: We collected data on cancer cases and deaths up to 2019, empirically validated the projection performance of multiple statistical models, and selected the optimal models by applying time series cross-validation. Results: We projected an increasing number of new cancer cases but a decreasing number of cancer deaths in both genders, with a large share of the burden attributable to population aging. We observed increasing incidence rates for most cancer sites but declining rates for some infection-associated cancers, including stomach and liver cancers. Colorectal and lung cancers were projected to remain the leading cancer burdens in both incidence and mortality in Japan over 2020–2054, while prostate and female breast cancers would be the leading incidence burdens among men and women, respectively. Conclusions: Findings from the decomposition analysis call for more supportive interventions to reduce mortality and improve the quality of life of Japanese elders. We emphasize the important role of governments and policymakers in reforming policies for controlling cancer risk factors, including oncogenic infections. The rapid increase and continued presence of cancer burdens associated with modifiable risk factors warrant greater efforts in cancer control programs, specifically in enhancing cancer screening and controlling cancer risk factors in Japan.
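The model selection step described here, time series cross-validation, can be illustrated with a minimal sketch. The code below is not the study's implementation; the data, the single candidate model, and the error metric are assumptions chosen only to show how candidate projection models can be ranked on held-out future years.

```python
# Minimal sketch of time-series cross-validation for selecting a projection model.
# Data, the candidate model, and the error metric are illustrative assumptions,
# not the study's actual implementation.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

years = np.arange(1980, 2020).reshape(-1, 1)        # hypothetical observation years
rates = 50 + 0.8 * (years.ravel() - 1980) + np.random.default_rng(0).normal(0, 2, 40)

candidates = {
    "linear_trend": LinearRegression(),
    # further candidates (e.g. age-period-cohort or joinpoint models) would be added here
}

splitter = TimeSeriesSplit(n_splits=5)               # expanding training window, future test folds
for name, model in candidates.items():
    errors = []
    for train_idx, test_idx in splitter.split(years):
        model.fit(years[train_idx], rates[train_idx])
        pred = model.predict(years[test_idx])
        errors.append(mean_absolute_percentage_error(rates[test_idx], pred))
    print(f"{name}: mean CV MAPE = {np.mean(errors):.3f}")
```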
Empirical validation of the WCST network structure in patients
Introduction: Cognitive impairment is a core feature of schizophrenia and other psychotic disorders, and executive deficits are among the most impaired cognitive functions. The Wisconsin Card Sorting Test (WCST) has been used extensively in the literature on schizophrenia and psychosis. The underlying structure of executive impairment may have important implications for our understanding of the complex connections between executive dysfunction and the psychopathology and neurofunctional basis of psychosis. Objectives: The objective was to empirically validate the dimensions of the WCST network structure of patients with regard to antecedent, concurrent and outcome variables. Methods: Subjects were 298 patients with a DSM-5 diagnosis of a psychotic disorder. To assess the empirical validation of the WCST network structure, antecedent, concurrent and outcome variables were selected from the CASH interview and other patient scales. Results: Pearson correlation coefficients between the four network loadings (NL) of the WCST, namely perseveration (PER), inefficient sorting (IS), failure to maintain the set (FMS) and learning (LNG), and the antecedent, concurrent and outcome validators are shown in the table. PER and IS showed common and strong associations with antecedent, concurrent and outcome validators. The LNG dimension was also moderately associated, and FMS did not show significant associations. Conclusions: The ‘Perseveration’ and ‘Inefficient sorting’ dimensions achieve and share common antecedent, concurrent and outcome validators, while the ‘Learning’ dimension achieves only partial validation in terms of antecedent and outcome validators and the ‘Failure to maintain the set’ dimension was not associated with external validators. These four underlying dysfunctions might help to disentangle the neurofunctional basis of executive deficits in psychosis.
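The core of the validation described in this abstract is a matrix of Pearson correlations between WCST dimensions and external validators. The sketch below is a generic illustration of that step under assumed variable names and random placeholder data; it does not reproduce the study's measures or results.

```python
# Illustrative sketch of correlating WCST network-loading dimensions with external
# validators via Pearson coefficients. Variable names and data are assumptions.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 298  # sample size reported in the abstract

dimensions = pd.DataFrame({
    "perseveration": rng.normal(size=n),
    "inefficient_sorting": rng.normal(size=n),
    "failure_to_maintain_set": rng.normal(size=n),
    "learning": rng.normal(size=n),
})
validators = pd.DataFrame({
    "premorbid_adjustment": rng.normal(size=n),   # hypothetical antecedent validator
    "negative_symptoms": rng.normal(size=n),      # hypothetical concurrent validator
    "functional_outcome": rng.normal(size=n),     # hypothetical outcome validator
})

for dim in dimensions:
    for val in validators:
        r, p = pearsonr(dimensions[dim], validators[val])
        print(f"{dim} vs {val}: r = {r:+.2f}, p = {p:.3f}")
```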
Specificity and sensitivity of the fixed-point test for binary mixture distributions
When two cognitive processes contribute to a behavioral output, each process producing a specific distribution of the behavioral variable of interest, and when the mixture proportion of these two processes varies as a function of an experimental condition, a common density point should be present in the observed distributions of the data across said conditions. In principle, one can statistically test for the presence (or absence) of a fixed point in experimental data to provide evidence in favor of (or against) the presence of a mixture of processes whose proportions are affected by an experimental manipulation. In this paper, we provide an empirical diagnostic of this test's ability to detect a mixture of processes. We do so by resampling real experimental data under different scenarios that mimic variations in the experimental design suspected to affect the sensitivity and specificity of the fixed-point test (i.e., mixture proportion, time on task, and sample size). Resampling such scenarios from real data allows us to preserve important features of the data that are typically observed in real experiments, while maintaining tight control over the properties of the resampled scenarios. This is of particular relevance given the stringent assumptions underlying the fixed-point test. With this paper, we ultimately aim to validate the fixed-point property of binary mixture data and to provide performance metrics to researchers who wish to test the fixed-point property on their own experimental data.
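The fixed-point property the abstract relies on can be demonstrated numerically: wherever the two component densities are equal, every mixture of them takes the same value, whatever the mixture proportion. The sketch below illustrates this with two arbitrary normal components; the component choices are assumptions, not the paper's data or its test implementation.

```python
# Minimal numerical illustration of the fixed-point property of binary mixtures:
# where the two component densities intersect, the mixture density is identical
# for every mixture proportion. Component distributions are assumptions.
import numpy as np
from scipy.stats import norm

x = np.linspace(-4, 8, 2000)
f1 = norm(loc=0, scale=1).pdf(x)     # e.g. fast-process distribution
f2 = norm(loc=3, scale=1.5).pdf(x)   # e.g. slow-process distribution

mixtures = {p: p * f1 + (1 - p) * f2 for p in (0.2, 0.5, 0.8)}

# the fixed point lies where the component densities intersect
crossing = x[np.argmin(np.abs(f1 - f2))]
values_at_crossing = [m[np.argmin(np.abs(x - crossing))] for m in mixtures.values()]
print(f"crossing point ~ {crossing:.2f}")
print("mixture densities at the crossing:", np.round(values_at_crossing, 4))
# all three values coincide, which is what the fixed-point test checks for in data
```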
Assessing Hierarchical Leisure Constraints Theory after Two Decades
This article assesses the status of hierarchical leisure constraints theory (Crawford & Godbey, 1987; Crawford, Jackson, & Godbey, 1991) with respect to several issues: clarification and elaboration of some aspects of the original model, a review of studies that have used or examined the model and the extent to which they are confirmatory, critiques of the original model by various authors, and avenues for further research. Conclusions drawn include that the model is cross-culturally relevant, that the model may be applied to forms of behavior other than leisure, and that, while research to date has been largely confirmatory, there is considerable potential for the theory to be expanded in order to advance leisure constraints research to the next level.
How to assess the accuracy of volume conduction models? A validation study with stereotactic EEG data
Volume conduction models of the human head are used in various neuroscience fields, such as for source reconstruction in EEG and MEG, and for modeling the effects of brain stimulation. Numerous studies have quantified the accuracy and sensitivity of volume conduction models by analyzing the effects of the geometrical and electrical features of the head model, the sensor model, the source model, and the numerical method. Most studies are based on simulations, as it is hard to obtain sufficiently detailed measurements to compare to models. The recording of stereotactic EEG during electric stimulation mapping provides an opportunity for such empirical validation. In the study presented here, we used the potential distribution of volume-conducted artifacts due to cortical stimulation to evaluate the accuracy of finite element method (FEM) volume conduction models. We adopted a widely used strategy for numerical comparison, i.e., we fixed the geometrical description of the head model and the mathematical method used to perform the simulations, and we gradually altered the head models by increasing the level of detail of the conductivity profile. We compared the simulated potentials at different levels of refinement with the measured potentials in three epilepsy patients. Our results show that increasing the level of detail of the volume conduction head model only marginally improves the accuracy of the simulated potentials when compared to sEEG measurements. The mismatch between measured and simulated potentials is, across all patients and models, at most 40 microvolts (i.e., 10% relative error) in 80% of the stimulation-recording pairs, and it is modulated by the distance between the recording and stimulating electrodes. Our study suggests that strategies commonly used to validate volume conduction models based solely on simulations might give an overly optimistic idea of volume conduction model accuracy. We recommend that more empirical validations be performed to identify those factors in volume conduction models that have the highest impact on the accuracy of simulated potentials. We share the dataset to allow researchers to further investigate the mismatch between measurements and FEM models and to contribute to improving volume conduction models.
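The comparison reported in the abstract boils down to computing, per stimulation-recording pair, the discrepancy between simulated and measured potentials and summarizing it (for example, a percentile of the absolute error and a relative error). The sketch below shows such a summary on placeholder arrays; it is not the shared dataset or the study's pipeline.

```python
# Hedged sketch of a model-versus-measurement comparison: absolute and relative error
# between simulated and measured potentials per stimulation-recording pair.
# The arrays are placeholders, not the study's sEEG data.
import numpy as np

rng = np.random.default_rng(2)
measured_uv = rng.normal(0, 200, size=500)                    # hypothetical measured potentials (microvolts)
simulated_uv = measured_uv + rng.normal(0, 20, size=500)      # hypothetical FEM-simulated potentials

abs_error = np.abs(simulated_uv - measured_uv)
rel_error = abs_error / np.maximum(np.abs(measured_uv), 1e-9)  # guard against division by zero

# summarize the error over stimulation-recording combinations, in the style of the
# abstract's "at most X microvolts in 80% of pairs" statement
print(f"80th percentile absolute error: {np.percentile(abs_error, 80):.1f} microvolts")
print(f"median relative error: {np.median(rel_error):.1%}")
```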
An Empirical Social Vulnerability Map for Flood Risk Assessment at Global Scale (“GlobE‐SoVI”)
Fatalities caused by natural hazards are driven not only by population exposure, but also by the population's vulnerability to these events, determined by intersecting characteristics such as education, age and income. Empirical evidence on the drivers of social vulnerability, however, is limited due to a lack of relevant data, in particular on a global scale. Consequently, existing global‐scale risk assessments rarely account for social vulnerability. To address this gap, we estimate regression models that predict fatalities caused by past flooding events (n = 913) based on potential social vulnerability drivers. Analyzing 47 variables calculated from publicly available spatial data sets, we establish five statistically significant vulnerability variables: mean years of schooling; share of elderly; gender income gap; rural settlements; and walking time to the nearest healthcare facility. We use the regression coefficients as weights to calculate the “Global‐Empirical Social Vulnerability Index (GlobE‐SoVI)” at a spatial resolution of ∼1 km. We find distinct spatial patterns of vulnerability within and across countries, with low GlobE‐SoVI scores (i.e., 1–2) in, for example, Northern America, northern Europe, and Australia, and high scores (i.e., 9–10) in, for example, northern Africa, the Middle East, and southern Asia. Globally, education has the highest relative contribution to vulnerability (roughly 58%), acting as a driver that reduces vulnerability; all other drivers increase vulnerability, with the gender income gap contributing ∼24% and the elderly another 11%. Due to its empirical foundation, the GlobE‐SoVI advances our understanding of social vulnerability drivers at the global scale and can be used for global (flood) risk assessments.
Plain Language Summary: Social vulnerability is rarely accounted for in global‐scale risk assessments. We develop an empirical social vulnerability map (“GlobE‐SoVI”) based on five key drivers of social vulnerability to flooding, that is, education, elderly, income inequality, rural settlements and travel time to healthcare, which we establish based on fatalities caused by past flooding events. Globally, we find education to have a strong vulnerability-reducing effect, while all other drivers increase vulnerability. Integrating social vulnerability into global‐scale (flood) risk assessments can help inform global policy frameworks that aim to reduce risks posed by natural hazards and climate change, and foster more equitable development globally.
Key Points: We develop a global map of social vulnerability at ∼1 km spatial resolution based on five key vulnerability drivers (“GlobE‐SoVI”). We establish vulnerability drivers empirically based on their contribution to predicting fatalities caused by past flooding events. Accounting for social vulnerability in global‐scale (flood) risk assessments can inform global policy frameworks that aim to reduce risk.
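The index construction described in the abstract, regression coefficients reused as weights over driver layers and rescaled to a 1–10 score, can be sketched as follows. Driver names follow the abstract, but the coefficient values and the gridded data are placeholders, not the GlobE‐SoVI inputs.

```python
# Illustrative sketch of building a weighted vulnerability index from regression
# coefficients and rescaling it to roughly 1-10. Coefficients and grid data are
# hypothetical assumptions, not the study's values.
import numpy as np

rng = np.random.default_rng(3)
n_cells = 10_000  # hypothetical ~1 km grid cells

drivers = {
    "mean_years_schooling": rng.uniform(0, 1, n_cells),
    "share_elderly": rng.uniform(0, 1, n_cells),
    "gender_income_gap": rng.uniform(0, 1, n_cells),
    "rural_settlement": rng.uniform(0, 1, n_cells),
    "walk_time_to_healthcare": rng.uniform(0, 1, n_cells),
}
# hypothetical regression coefficients: schooling reduces vulnerability, the rest increase it
weights = {
    "mean_years_schooling": -0.58,
    "share_elderly": 0.11,
    "gender_income_gap": 0.24,
    "rural_settlement": 0.04,
    "walk_time_to_healthcare": 0.03,
}

raw = sum(weights[k] * drivers[k] for k in drivers)
# min-max rescale to a 1-10 index
index = 1 + 9 * (raw - raw.min()) / (raw.max() - raw.min())
print(f"index scores: min={index.min():.1f}, max={index.max():.1f}, mean={index.mean():.1f}")
```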
Typologies of Service Supply Chain Resilience: A Multidimensional Analysis from China’s Regional Economies
This study investigates the resilience of service supply chains and its role in promoting sustainable regional development in China. Based on data from 31 provinces between 2017 and 2023, we develop a multidimensional evaluation framework grounded in the structure, relationship, and subject model. Using entropy weighting, fuzzy-set qualitative comparative analysis, necessary condition analysis, and OLS regression, we identify three dominant configurations: cost-adaptive, cost-growth, and technology-driven. Among them, the cost-adaptive path remains statistically significant when tested against updated 2023 data. The findings reveal persistent regional disparities and demonstrate how context-specific strategies can shape service resilience under institutional and market variations. This study contributes to supply chain sustainability research by integrating recent empirical evidence with configurational analysis, offering practical policy insights for balancing efficiency, adaptability, and inclusive development.
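Of the methods listed in this abstract, the entropy weighting step is the most mechanical and can be sketched briefly: indicator weights are derived from the dispersion of each indicator across provinces. The matrix below is random placeholder data, not the study's indicator system.

```python
# Minimal sketch of the entropy weight method: indicators whose values vary more
# across provinces carry more information and receive higher weights.
# The indicator matrix is a random placeholder, not the study's data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0.1, 1.0, size=(31, 6))   # 31 provinces x 6 hypothetical indicators

# normalize each indicator so its values sum to 1 across provinces
P = X / X.sum(axis=0)

# information entropy per indicator (k = 1 / ln(n) keeps entropy in [0, 1])
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)

# indicators with lower entropy (more dispersion) receive higher weight
weights = (1 - entropy) / (1 - entropy).sum()
print("entropy weights:", np.round(weights, 3))
```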
An exploratory study for software change prediction in object-oriented systems using hybridized techniques
Variation in software requirements, technological upgrades and the occurrence of defects necessitate changes in software for its effective use. Early detection of the classes of a software system that are prone to change is critical for software developers and project managers, as it can aid in the efficient allocation of limited resources. Moreover, change-prone classes should be efficiently restructured and designed to prevent the introduction of defects. Recently, the use of search-based techniques and their hybridized counterparts has been advocated in the field of software engineering predictive modeling, as these techniques help identify optimal solutions for a specific problem by testing the goodness of a number of possible solutions. In this paper, we propose a novel approach for change prediction using search-based techniques and hybridized techniques. Further, we address the following issues: (i) low repeatability of empirical studies, (ii) limited use of statistical tests for comparing the effectiveness of models, and (iii) non-assessment of the trade-off between runtime and predictive performance of various techniques. This paper presents an empirical validation of search-based techniques and their hybridized versions, which yields unbiased, accurate and repeatable results. The study analyzes and compares the predictive performance of five search-based techniques, five hybridized techniques, four widely used machine learning techniques and a statistical technique for predicting change-prone classes in six application packages of a popular mobile operating system, Android. The results of the study advocate the use of hybridized techniques for developing models to identify change-prone classes.
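The evaluation protocol this abstract emphasizes, identical cross-validation folds for every technique plus a statistical test on the fold-wise scores, can be sketched generically. The sketch below uses two stand-in classifiers and synthetic data; the study's search-based and hybridized techniques are not reproduced.

```python
# Hedged sketch of a repeatable model comparison: classifiers evaluated on the same
# cross-validation folds, followed by a paired statistical test on fold-wise scores.
# Data and model choices are illustrative, not the study's techniques.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import wilcoxon

X, y = make_classification(n_samples=400, n_features=20, weights=[0.7, 0.3], random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)   # fixed seed for repeatability

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="roc_auc") for name, m in models.items()}

for name, s in scores.items():
    print(f"{name}: mean AUC = {s.mean():.3f}")
stat, p = wilcoxon(scores["logistic_regression"], scores["random_forest"])
print(f"Wilcoxon signed-rank test across folds: p = {p:.3f}")
```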
Understanding pandemic behaviours in Singapore–Application of the Terror Management Health Model
The novel coronavirus, now known as COVID-19, was first reported in China in December 2019 and became a global crisis by March 2020. Both adaptive and maladaptive behaviours were observed in response to aspects of the crisis, some of which appeared counterproductive to coping with and curbing the threat of COVID-19. For instance, the purchase and use of surgical masks and sanitisers could be understood as logical health-oriented behaviours relevant to coping with the COVID-19 pandemic. The breaching of social distancing measures and the forwarding of unverified news, however, might have done more harm than good. By applying the proximal and distal defences proposed within the Terror Management Health Model (TMHM), this article suggests explanations for these behaviours as individuals' attempts to alleviate anxiety arising from reminders of their mortality. Information from local newspapers and media is used to highlight and identify common behaviours observed in the pandemic, and the TMHM is applied to explain these behaviours. The paper concludes with a call for empirical validation of the TMHM for the behaviours observed in relation to COVID-19, and for the use of TMHM conceptualisations to develop countermeasures to reduce maladaptive behaviours in current and future pandemics in Singapore. Keywords: TMHM, COVID-19, health behaviours, Singapore, empirical validation
AI-driven value management in construction: a theoretically-grounded framework with empirical validation
Value Management (VM) of construction projects is beset by the inherent pitfalls of expertise dependence, fixed processes, and segregation from data-rich environments. This paper presents and evaluates an artificial-intelligence-facilitated Value Management System (AIVMS) that incorporates predictive analytics, Multi-Criteria Decision-Making (MCDM), and Explainable AI (XAI) to facilitate open, fact-based, stakeholder-centric decisions throughout project life cycles. The system was designed using the Design Science Research approach, drawing on a systematic literature review of 127 peer-reviewed papers, and was validated through a three-round Delphi study with 24 construction professionals. The AIVMS consists of six layers: intelligent value driver identification, a predictive analytics engine, a dynamic MCDM engine, an integration and optimization core, an explainable AI interface, and an adaptive learning system. Empirical validation through three real-world project case studies revealed significant improvements: a 23% increase in decision-making consistency, a 31% reduction in value engineering cycle time, and an 89% improvement in stakeholder satisfaction with the transparency of decisions. The framework achieved 91.2% precision in forecasting a variety of performance measures and enabled the identification of an average cost optimization potential of €2.8 M. This research presents the first empirically validated integration of AI, MCDM, and XAI for construction value management, combining machine intelligence with human-centric transparency requirements and providing practical implementation avenues for existing BIM and project management systems.
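The MCDM component mentioned in the abstract can be illustrated, in its simplest weighted-sum form, with the sketch below. The alternatives, criteria, weights, and scores are hypothetical and stand in for whatever the AIVMS's dynamic MCDM engine actually computes.

```python
# Illustrative weighted-sum multi-criteria decision step. All names and numbers are
# hypothetical assumptions; this is not the AIVMS framework itself.
import numpy as np

criteria = ["cost", "schedule", "quality", "sustainability"]
weights = np.array([0.35, 0.25, 0.25, 0.15])          # assumed stakeholder weights, summing to 1

# rows = design alternatives, columns = criteria scores normalized to [0, 1]
# (cost and schedule are expressed as "benefit" scores, i.e. higher is better)
alternatives = {
    "option_A": np.array([0.8, 0.6, 0.7, 0.5]),
    "option_B": np.array([0.6, 0.9, 0.8, 0.7]),
    "option_C": np.array([0.9, 0.5, 0.6, 0.9]),
}

ranked = sorted(alternatives.items(), key=lambda kv: kv[1] @ weights, reverse=True)
for name, scores in ranked:
    print(f"{name}: weighted value score = {scores @ weights:.3f}")
```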