103 result(s) for "Data explanatory analysis"
Impact of COVID-19 on Urban Mobility and Parking Demand Distribution: A Global Review with Case Study in Melbourne, Australia
The tremendous impact of the novel coronavirus (COVID-19) on societal, political, and economic rhythms has given rise to a significant overall shift from pre- to post-pandemic policies. Restrictions, stay-at-home regulations, and lockdowns have directly influenced day-to-day urban transportation flow. The rise of door-to-door services and the demand for visiting medical facilities, grocery stores, and restaurants have significantly affected urban transportation modal demand, further impacting zonal parking demand distribution. This study reviews the overall impacts of the pandemic on urban transportation with respect to a variety of policy changes in different cities. The parking demand shift was investigated by exploring the during- and post-COVID-19 parking policies of distinct metropolises. Detailed Melbourne city parking data, generated by Internet of Things (IoT) sensors and devices, are examined. Empirical data from 2019 (16 March to 26 May) and 2020 (16 March to 26 May) are explored in depth using explanatory data analysis to demonstrate the demand and average parking duration shifts from district to district. The results show that the experimental zones of Docklands, Queensbery, Southbanks, Titles, and Princess Theatre experienced decreases in vehicle presence of 29.2%, 36.3%, 37.7%, 23.7%, and 40.9%, respectively. Furthermore, on-street analysis of parking bays within the Princess Theatre zone (Lonsdale Street, Exhibition Street, Spring Street, and Little Bourke Street) indicated decreases in vehicle presence of 38.7%, 56.4%, 12.6%, and 35.1%, respectively. In conclusion, potential future policymaking frameworks are discussed that could provide further guidance in stipulating epidemic prevention and control policies, particularly in relation to parking regulations during the pandemic.
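The zone-level percentage-change figures above can be reproduced for any zone from vehicle-presence counts in two matched periods; a minimal sketch, with invented counts rather than the study's data:

```python
# Hypothetical weekly vehicle-presence totals for one parking zone,
# covering the matched 16 March - 26 May window in 2019 vs. 2020.
# These numbers are invented for illustration, not taken from the study.
presence_2019 = [1200, 1150, 1300]
presence_2020 = [850, 790, 910]

def pct_change(before, after):
    """Percentage change in total vehicle presence between two periods."""
    b, a = sum(before), sum(after)
    return 100.0 * (a - b) / b

change = pct_change(presence_2019, presence_2020)
print(round(change, 1))  # negative value = drop in presence
```

The same calculation applies at street level by summing over the parking bays of a single street instead of a whole zone.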
Prediction of Stress Level on Indian Working Professionals Using Machine Learning
Stress levels amongst Indian employees have increased due to a variety of factors and are a matter of great concern for organizations. This study is based on Indian working professionals, and real data were collected using non-probability convenience sampling. A questionnaire was drafted based on eighteen factors affecting the mental health of professionals. The study addresses two dimensions: first, identifying the important influential features that trigger stress in the lives of working professionals, and second, predicting stress levels. Various supervised machine learning algorithms were evaluated, and of these, the Support Vector Machine Regressor model showed the best performance. The main contribution of the paper lies in the identification and ranking of ten important stress-triggering features, which can guide organizations in developing policies to take care of their employees. The other deliverable is a GUI-based stress prediction software tool built on machine learning techniques.
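A minimal sketch of the prediction side, assuming a linear-kernel SVR and an invented 18-factor Likert survey matrix; the coefficient ranking at the end stands in for the paper's feature-importance step (the paper's exact features and model settings are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical survey matrix: 200 respondents x 18 Likert-scale factors (1-5)
X = rng.integers(1, 6, size=(200, 18)).astype(float)
# Synthetic stress level driven mainly by factors 0 and 4 (invented weights)
y = 0.6 * X[:, 0] + 0.3 * X[:, 4] + rng.normal(0.0, 0.2, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="linear"))
model.fit(X, y)

# Rank factors by absolute coefficient size, a simple stand-in for the
# feature-ranking step described in the abstract
coefs = model.named_steps["svr"].coef_.ravel()
ranking = np.argsort(-np.abs(coefs))
print("R^2:", model.score(X, y), "top factors:", ranking[:2])
```

With a linear kernel the coefficients are directly interpretable; for an RBF kernel one would switch to permutation importance instead.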
Quantification of collective behaviour via causality analysis
Terms such as leader, mediator, and follower apply equally to a pack of wolves, a street protest crowd, or a business team, and carry very similar meanings. This indicates the presence of some general law or structure that governs collective behaviour. To reveal it, we selected the parameter most common across levels of organisation: motion. A causality analysis of distance correlations was performed to obtain follow-up networks that show who follows whom and how strongly. These networks characterise an observed system in general and act as an automation bridge between the biological experiment and the broad field of network analysis. The proposed method was tested on 3D image data from a controlled experiment on a 6-member school of aquarium fish (Tiger Barb). The network patterns are easily interpreted ethologically and agree with the expected behaviour.
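One common way to operationalize "who follows whom" is a lagged correlation of motion: if animal B's trajectory tracks animal A's a few time steps later, B is a candidate follower. A 1-D sketch on a synthetic leader/follower pair (the paper's distance-correlation causality analysis is more elaborate; everything below is invented):

```python
import numpy as np

def follow_strength(leader, follower, lag=2):
    """Correlation between the leader's position and the follower's position
    `lag` steps later; a high value suggests the follower tracks the leader."""
    a, b = np.asarray(leader)[:-lag], np.asarray(follower)[lag:]
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
leader = np.cumsum(rng.normal(size=300))          # random-walk trajectory (1-D for brevity)
# Follower copies the leader two steps late, plus small noise
# (np.roll wraps the first two samples, which is negligible here)
follower = np.roll(leader, 2) + rng.normal(0.0, 0.1, 300)

print(follow_strength(leader, follower, lag=2))
```

Evaluating `follow_strength` for every ordered pair of individuals yields a weighted directed graph, which is the "follow-up network" handed off to standard network-analysis tools.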
Spatial Interpolation for the Distribution of Groundwater Level in an Area of Complex Geology Using Widely Available GIS Tools
The present study is an attempt to implement several spatial interpolation methods for the distribution of groundwater level in a wider area with multiple aquifers having variable hydraulic characteristics. Moreover, the goal of this study is to compare the results of these methods and check their accuracy and reliability, considering mainly the physical meaning of the outcome. Finally, we try to determine which of these methods can identify hydrogeological features such as groundwater divides, hydraulic conductivity barriers and no-flow boundaries, and highlight the hydraulic relationships between aquifers. Exploratory Spatial Data Analysis proved to be a necessary step prior to the implementation of spatial interpolation methods, since normalization of datasets, removal of general trends and data declustering were necessary for the proper implementation of geostatistical methods and reduction of the uncertainty of the results. Inverse Distance Weighting, radial basis functions, simple Kriging and Cokriging methods were implemented. None of the implemented methods produced results that were totally unreliable or erroneous, and each method added pieces of information useful for a deeper understanding of the hydrogeological processes in the study area. The most appropriate spatial interpolation method for generating a groundwater level distribution surface in an area with multiple aquifers and significant heterogeneity in hydraulic properties proved to be the Ordinary Cokriging method with altitude as a second parameter, which was highly correlated to groundwater level values in the study area. The Cokriging method succeeds in accurately representing the local variations within the individual aquifers and also highlights the hydraulic relationships between them.
Highlights
  • All spatial interpolation methods produced realistic surfaces.
  • Geostatistical methods produced smoother surfaces and lower hydraulic gradients.
  • Applying block declustering to sample data significantly reduces prediction uncertainty.
  • The most appropriate method for groundwater level distribution is Ordinary Cokriging.
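The simplest of the methods compared, Inverse Distance Weighting, is easy to sketch: the predicted level at a query point is a weighted mean of known well levels, with weights proportional to 1/distance^power. The well coordinates and levels below are invented, not the study's data:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting estimate of groundwater level at one point."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    d = np.linalg.norm(xy_known - np.asarray(xy_query, float), axis=1)
    if np.any(d == 0):                      # query coincides with a well
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_known) / np.sum(w))

wells = [(0, 0), (10, 0), (0, 10)]          # hypothetical well coordinates
levels = [100.0, 80.0, 90.0]                # groundwater level at each well
print(idw(wells, levels, (5, 5)))
```

Unlike Kriging, IDW takes no account of spatial correlation structure or a covariate such as altitude, which is why the study found the geostatistical methods, and Ordinary Cokriging in particular, more appropriate here.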
Impacts of Geographical Location and Construction Type on As-Built Roughness in Highway Pavement Construction
This paper investigates the impacts of geographic location (urban and rural) and construction type (reconstruction, resurfacing and replacement) on as-built roughness using explanatory data analysis (EDA) and panel data analysis (PDA). Sets of roughness measurements, the International Roughness Index (IRI), and other data for reconstructed, replaced, and resurfaced pavement projects in Wisconsin, U.S.A. between 1998 and 2000 were used for the analysis. This research developed a panel fixed-effect model to identify significant factors and quantify the interactive impacts of geographic location and construction type. The results show that geographic location is strongly significant, but construction type is not, in both EDA and PDA. The likelihood ratio test (LRT) and other empirical analyses also support that geographic location is a strongly significant factor affecting as-built IRI. The overall results quantify the interactive changes of IRI as 0.129 and 0.178 by PDA and EDA, respectively.
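The panel fixed-effect idea can be illustrated with the classic within (demeaning) transformation: demeaning each variable by project sweeps out unobserved project-level heterogeneity, and OLS on the demeaned data recovers the coefficient of a time-varying covariate. Everything below is synthetic, and note that a time-invariant factor such as urban/rural would itself be swept out by demeaning, so the study's model is necessarily richer than this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n_proj, n_t = 30, 4                       # hypothetical panel: 30 projects x 4 periods
groups = np.repeat(np.arange(n_proj), n_t)

x = rng.normal(size=n_proj * n_t)         # an invented time-varying covariate
project_effect = np.repeat(rng.normal(0.0, 1.0, n_proj), n_t)  # unobserved heterogeneity
iri = 1.0 + 0.5 * x + project_effect + rng.normal(0.0, 0.1, n_proj * n_t)

def within(v):
    """Demean a variable by project, sweeping out project fixed effects."""
    means = np.bincount(groups, weights=v) / np.bincount(groups)
    return v - means[groups]

# OLS on demeaned data = the panel fixed-effects (within) estimator
beta = np.linalg.lstsq(within(x)[:, None], within(iri), rcond=None)[0][0]
print(round(beta, 2))   # should recover the true slope of 0.5
```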
Intention to treat and per protocol analyses: differences and similarities
Randomized trials can take more explanatory or more pragmatic approaches. Pragmatic studies, conducted closer to real-world conditions, assess treatment effectiveness while considering factors like protocol adherence. In these studies, intention-to-treat (ITT) analysis is fundamental, comparing outcomes regardless of the actual treatment received. Explanatory trials, conducted closer to optimal conditions, evaluate treatment efficacy, commonly with a per protocol (PP) analysis, which includes only outcomes from adherent participants. ITT and PP are strategies used in the conception, design, conduct (protocol execution), analysis, and interpretation of trials. Each serves distinct objectives. While both can be valid, when bias is controlled, and complementary, each has its own limitations. By excluding nonadherent participants, PP analyses can lose the benefits of randomization, resulting in group differences in factors (influencing adherence and outcomes) that were present at baseline. Additionally, clinical and social factors affecting adherence can also operate during follow-up, that is, after randomization. Therefore, incomplete adherence may introduce postrandomization confounding. Conversely, ITT analysis, including all participants regardless of adherence, may dilute treatment effects. Moreover, varying adherence levels could limit the applicability of ITT findings in settings with diverse adherence patterns. Both ITT and PP analyses can be affected by selection bias due to differential losses and nonresponse (ie, missing data) during follow-up. Combining high-quality and comprehensive data with advanced statistical methods, known as g-methods, like inverse probability weighting, may help address postrandomization confounding in PP analysis as well as selection bias in both ITT and PP analyses.
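The g-method the abstract names, inverse probability weighting, can be sketched on synthetic data: adherence depends on a baseline severity variable, which confounds the naive per-protocol mean, and weighting adherent participants by 1/Pr(adherence | severity) removes that confounding. All variables and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
sicker = rng.integers(0, 2, n)                 # baseline severity (the confounder)
# Adherence is lower among sicker participants (postrandomization confounding)
p_adhere = np.where(sicker == 1, 0.5, 0.9)
adherent = rng.random(n) < p_adhere
# Severity also worsens the outcome, independent of adherence
outcome = 1.0 - 0.3 * sicker + rng.normal(0.0, 0.1, n)

# Naive per-protocol mean over adherent participants over-represents the
# healthier (more adherent) group; IPW re-weights to the full population
naive_pp = outcome[adherent].mean()
ipw_pp = np.average(outcome[adherent], weights=1.0 / p_adhere[adherent])
print(naive_pp, ipw_pp)   # naive is biased upward; IPW is close to the truth
```

In practice Pr(adherence | severity) is not known and must itself be estimated (e.g. by logistic regression on baseline and time-varying covariates), which is where the "high-quality and comprehensive data" requirement comes in.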
The Inverse Gaussian Process as a Degradation Model
This article systematically investigates the inverse Gaussian (IG) process as an effective degradation model. The IG process is shown to be a limiting compound Poisson process, which gives it a meaningful physical interpretation for modeling degradation of products deteriorating in random environments. Treated as the first passage process of a Wiener process, the IG process is flexible in incorporating random effects and explanatory variables that account for heterogeneities commonly observed in degradation problems. This flexibility makes the class of IG process models much more attractive compared with the Gamma process, which has been thoroughly investigated in the literature of degradation modeling. The article also discusses statistical inference for three random effects models and model selection. It concludes with a real world example to demonstrate the applicability of the IG process in degradation analysis. Supplementary materials for this article are available online.
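A monotone IG-process degradation path is straightforward to simulate as a cumulative sum of independent inverse Gaussian increments; the sketch below uses the standard Michael-Schucany-Haas sampler with invented parameters (mu = 1, lam = 4), not a model fitted in the article:

```python
import numpy as np

rng = np.random.default_rng(4)

def ig_increment(mu, lam, rng):
    """Draw one inverse Gaussian IG(mu, lam) variate (Michael-Schucany-Haas)."""
    nu = rng.normal() ** 2
    x = mu + mu**2 * nu / (2 * lam) - mu / (2 * lam) * np.sqrt(
        4 * mu * lam * nu + mu**2 * nu**2
    )
    # Accept the smaller root with probability mu / (mu + x), else take mu^2 / x
    return x if rng.random() <= mu / (mu + x) else mu**2 / x

# Degradation path: cumulative sum of independent, strictly positive increments
path = np.cumsum([ig_increment(mu=1.0, lam=4.0, rng=rng) for _ in range(50)])
print(path[-1])   # total degradation after 50 inspection intervals, ~50 on average
```

The strictly positive increments give the monotone sample paths that make the IG process (like the Gamma process) physically sensible for wear and degradation, in contrast to a plain Wiener process.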
Real-world evidence: How pragmatic are randomized controlled trials labeled as pragmatic?
Introduction: Pragmatic randomized controlled trials (RCTs) mimic usual clinical practice and are critical to inform decision-making by patients, clinicians and policy-makers in real-world settings. Pragmatic RCTs assess the effectiveness of available medicines, while explanatory RCTs assess the efficacy of investigational medicines. Explanatory and pragmatic are the extremes of a continuum. This debate article seeks to evaluate and provide recommendations on how to characterize pragmatic RCTs in light of the current landscape of RCTs. It is supported by findings from a PubMed search conducted in August 2017, which retrieved 615 RCTs self-labeled in their titles as "pragmatic" or "naturalistic". We focused on 89 of these trials that assessed medicines (drugs or biologics). Discussion: 36% of these 89 trials were placebo-controlled, performed before licensing of the medicine, or done in a single center. In our opinion, such RCTs overtly deviate from usual care and pragmatism. It follows that the use of the term 'pragmatic' to describe them conveys a misleading message to patients and clinicians. Furthermore, many other trials among the 615 coined as 'pragmatic' and assessing other types of intervention are plausibly not very pragmatic; however, this is impossible for a reader to tell without access to the full protocol and insider knowledge of the trial conduct. The degree of pragmatism should be evaluated by the trial investigators themselves using the PRECIS-2 tool, which comprises 9 domains, each scored from 1 (very explanatory) to 5 (very pragmatic). Conclusions: To allow for a more appropriate characterization of the degree of pragmatism in clinical research, submissions of RCTs to funders, research ethics committees and peer-reviewed journals should include a PRECIS-2 assessment done by the trial investigators. Clarity and accuracy on the extent to which an RCT is pragmatic will help readers understand how relevant it is to real-world practice.
Default avoidance on credit card portfolios using accounting, demographical and exploratory factors: decision making based on machine learning (ML) techniques
Effective and thorough credit-risk management is a key factor for lending institutions, as significant financial losses can arise from borrowers' default. Machine learning methods can measure and analyze credit risk objectively, and they are attracting increasing attention. This study analyzes default payment data from a credit card portfolio containing some 30,000 clients from Taiwan, with twenty-three attributes and no missing information. We compare the prediction accuracy of seven classification methods: KNN, Logistic Regression, Naïve Bayes, Decision Trees, Random Forest, SVC, and Linear SVC. The results indicate that only a few of the typical variables used can adequately analyze default characteristics in terms of lending decisions. The results provide effective feedback to credit evaluators, lending institutions and business analysts for in-depth analysis. They also point to the importance of precautionary borrowing techniques for better understanding credit-card borrowers' behavior, along with specific accounting, historical and demographical characteristics.
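The classifier-comparison workflow is simple to sketch with scikit-learn; the data below are a synthetic stand-in generated with `make_classification` (23 attributes, binary default label), not the Taiwan dataset, and only three of the seven listed methods are shown for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a default-payment dataset: 23 attributes, binary label
X, y = make_classification(n_samples=2000, n_features=23,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=0),
}
# Fit each model and record held-out accuracy for comparison
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(accuracy)
```

For a real credit portfolio, accuracy alone is a poor yardstick because defaults are imbalanced; precision/recall or AUC per class would be the natural extension.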
Retrospective use of the Pragmatic-Explanatory Continuum Indicator Summary-2 trial design tool to assess design choices in randomized controlled trials: an empirical review
The Pragmatic-Explanatory Continuum Indicator Summary-2 (PRECIS-2) tool has been widely used to help investigators design randomized trials, facilitating the task of aligning design choices with an explanatory or pragmatic primary trial intention. PRECIS-2 is increasingly being used to retrospectively assess the degree of pragmatism or explanatoriness among published trials within reviews. There is little information on the interrater reliability of the tool and no consensus on the preferred method of achieving an accurate and reliable judgment of trial “pragmatism” when using PRECIS-2 retrospectively. The aims of this study were to assess the level of pragmatism or explanatoriness of trials that cite PRECIS-2 and to assess interrater reliability of PRECIS-2 using different scoring approaches. We compared agreement between two independent ratings within a single pair with agreement between consensus scores reached by two independent pairs of reviewers and whether widening the agreement criteria increased interrater reliability. Thirty randomized controlled trials (RCTs) were randomly selected from trials citing the PRECIS-2 tool. Two pairs of reviewers, a clinician paired with a methodologist in each case, were trained and independently scored each trial and reached a consensus score within pairs. Agreement between reviewers within pairs and between consensus scores across pairs was assessed using kappa statistics for each of the nine PRECIS-2 domains. RCTs citing PRECIS-2 had predominantly pragmatic design features. Interrater reliability within pairs was low across all domains, with the highest levels found in the two domains of analysis (0.32) and follow-up (0.33). Agreement across pairs on the consensus scores was similarly low. Agreement between reviewers and reviewer pairs was above 70% when agreement was reclassified as “within 1-point difference on the scoring scale” for eight domains, but no improvement was obtained for the remaining domain. 
Trials citing PRECIS-2 tend to have predominantly pragmatic design features. When using PRECIS-2 to retrospectively score trial publications, agreement between consensus scores across pairs of reviewers was no better than agreement within pairs. Reconfiguring the PRECIS scoring scale and improving scoring guidance may provide a more meaningful, easily interpreted measure of "pragmatism" for trialists wishing to use PRECIS-2 as a review tool. The Pragmatic-Explanatory Continuum Indicator Summary-2 (PRECIS-2) tool is designed to help researchers match their design decisions to the intended purpose of their trial. The intention of a trial can be "explanatory," which improves our understanding of how an intervention works, or "pragmatic," which supports decision-making in health care. Increasingly, the tool has been used for a secondary purpose: in systematic reviews. Here the tool is used to judge the level of "pragmatism" or "explanatoriness" of trials included in the review to aid the understanding of trial results. However, there is debate on the most reliable means of making this judgment. Sometimes judgments are made by one reviewer; other times, by multiple reviewers. Our study evaluated the interrater reliability of two methods of scoring trial publications using PRECIS-2: individual reviewer scores and pairs of reviewers agreeing on a consensus score. We found that neither method we tested produced a reliable judgment using PRECIS-2, and the scores from two reviewers agreeing on a consensus were no more reliable than scores from a single reviewer. An additional analysis showed that simplifying the scoring from the original five-point scale to a three-point scale may give a more reliable judgment of the "pragmatism" or "explanatoriness" of published trials. This simpler method of scoring should be encouraged for retrospective use of PRECIS-2 in systematic reviews.
  • There is ongoing debate as to the most reliable method to assess "pragmatism" in published trials.
  • The reliability of PRECIS-2 in judging published trials is poor, regardless of the number of reviewers.
  • Reliability improves by moving to a 3-point scoring scale when scoring published trials.
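The kappa statistic used for the domain-level agreement above is Cohen's kappa, which discounts observed agreement by the agreement expected under chance. A self-contained sketch; the two reviewers' PRECIS-2 scores below are invented for illustration:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical PRECIS-2 domain scores (1-5) from two reviewers on ten trials
rev_a = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
rev_b = [5, 4, 3, 3, 5, 2, 5, 4, 3, 4]
print(round(cohens_kappa(rev_a, rev_b), 2))
```

The "within 1-point" reclassification described in the study amounts to collapsing near-misses into agreements before computing agreement, which is why it raises the reported percentages.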