1,491 results for "Research, Industrial Statistical methods."
Design of experiments for engineers and scientists
The tools and techniques used in the Design of Experiments (DOE) have proven successful in meeting the challenge of continuous improvement over the last 15 years. However, research has shown that applications of these techniques in small and medium-sized manufacturing companies are limited, owing to a lack of the statistical knowledge required for their effective implementation. Although many books have been written on this subject, they are mainly by statisticians, for statisticians, and are not appropriate for engineers. Design of Experiments for Engineers and Scientists overcomes the problem of statistics by taking a unique approach using graphical tools. The same outcomes and conclusions are reached as by those using statistical methods, and readers will find the concepts in this book both familiar and easy to understand. The book treats planning, communication, engineering, teamwork, and statistical skills in separate chapters and then combines these skills through the use of many industrial case studies. Design of Experiments forms part of the suite of tools used in Six Sigma. Key features: * Provides essential DOE techniques for process improvement initiatives * Introduces simple graphical techniques as an alternative to advanced statistical methods, reducing the time taken to design and develop prototypes and to reach the market * Case studies place DOE techniques in the context of different industry sectors * An excellent resource for Six Sigma training programs. This book will be useful to engineers and scientists from all disciplines tackling all kinds of manufacturing, product, and process quality problems, and will be an ideal resource for students of this topic. Dr Jiju Anthony is Senior Teaching Fellow at the International Manufacturing Unit at Warwick University. He is also a trainer and consultant in DOE and has worked as such for a number of companies, including Motorola, Vickers, Procter and Gamble, Nokia, Bosch, and a large number of SMEs.
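As a generic illustration of the kind of designed experiment the book covers, a full factorial design simply enumerates every combination of factor levels. A minimal Python sketch (the factor names are hypothetical, not taken from the book):

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every run of a full factorial design.

    factors: dict mapping factor name -> list of levels.
    Returns a list of dicts, one per experimental run.
    """
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# A 2^3 design: three factors at two levels each gives 8 runs.
runs = full_factorial({
    "temperature": ["low", "high"],
    "pressure": ["low", "high"],
    "catalyst": ["A", "B"],
})
```

Fractional factorial designs, also common in DOE practice, would run only a carefully chosen subset of these combinations.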
Statistical software for analyzing the health effects of multiple concurrent exposures via Bayesian kernel machine regression
Background: Estimating the health effects of multi-pollutant mixtures is of increasing interest in environmental epidemiology. Recently, a new approach for estimating the health effects of mixtures, Bayesian kernel machine regression (BKMR), has been developed. This method estimates the multivariable exposure-response function in a flexible and parsimonious way, conducts variable selection on the (potentially high-dimensional) vector of exposures, and allows for a grouped variable selection approach that can accommodate highly correlated exposures. However, the application of this novel method has been limited by a lack of available software, the need to derive interpretable output in a computationally efficient manner, and the inability to apply the method to non-continuous outcome variables.
Methods: This paper addresses these limitations by (i) introducing an open-source software package in the R programming language, the bkmr R package, (ii) demonstrating methods for visualizing high-dimensional exposure-response functions, and for estimating scientifically relevant summaries, (iii) illustrating a probit regression implementation of BKMR for binary outcomes, and (iv) describing a fast version of BKMR that utilizes a Gaussian predictive process approach. All of the methods are illustrated using fully reproducible examples with the provided R code.
Results: Applying the methods to a continuous outcome example illustrated the ability of the BKMR implementation to estimate the health effects of multi-pollutant mixtures in the context of a highly nonlinear, biologically-based dose-response function, and to estimate overall, single-exposure, and interactive health effects. The Gaussian predictive process method led to a substantial reduction in the runtime, without a major decrease in accuracy. In the setting of a larger number of exposures and a dichotomous outcome, the probit BKMR implementation was able to correctly identify the variables included in the exposure-response function and yielded interpretable quantities on the scale of a latent continuous outcome or on the scale of the outcome probability.
Conclusions: This newly developed software, integrated suite of tools, and extended methodology makes BKMR accessible for use across a broad range of epidemiological applications in which multiple risk factors have complex effects on health.
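BKMR itself is implemented in the bkmr R package. As a loose, much-simplified illustration of the kernel idea behind flexible exposure-response estimation, a Nadaraya-Watson kernel smoother (a stand-in, not the BKMR algorithm; the data are invented) can be sketched in Python:

```python
import math

def gaussian_kernel(z1, z2, bandwidth=1.0):
    """Similarity between two exposure profiles; nearby profiles get weight near 1."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(z1, z2))
    return math.exp(-sq_dist / (2 * bandwidth ** 2))

def kernel_smooth(z_new, exposures, outcomes, bandwidth=1.0):
    """Kernel-weighted average of outcomes: a crude, nonparametric estimate
    of the exposure-response surface at a new exposure profile z_new."""
    weights = [gaussian_kernel(z_new, z, bandwidth) for z in exposures]
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

# Toy mixture data: two pollutants, outcome rises with their combined level.
exposures = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
outcomes = [0.0, 1.0, 1.0, 2.0]
estimate = kernel_smooth((0.9, 0.9), exposures, outcomes)
```

BKMR goes well beyond this sketch: it places a Gaussian process prior on the exposure-response function and performs Bayesian variable selection over the exposures.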
Novel keyword co-occurrence network-based methods to foster systematic reviews of scientific literature
Systematic reviews of scientific literature are important for mapping the existing state of research and highlighting further growth channels in a field of study, but systematic reviews are inherently tedious, time consuming, and manual in nature. In recent years, keyword co-occurrence networks (KCNs) have been exploited for knowledge mapping. In a KCN, each keyword is represented as a node and each co-occurrence of a pair of words is represented as a link. The number of times that a pair of words co-occurs in multiple articles constitutes the weight of the link connecting the pair. The network constructed in this manner represents cumulative knowledge of a domain and helps to uncover meaningful knowledge components and insights based on the patterns and strength of links between keywords that appear in the literature. In this work, we propose a KCN-based approach that can be implemented prior to undertaking a systematic review to guide and accelerate the review process. The novelty of this method lies in the new metrics used for statistical analysis of a KCN that differ from those typically used for KCN analysis. The approach is demonstrated through its application to nano-related Environmental, Health, and Safety (EHS) risk literature. The KCN approach identified the knowledge components, knowledge structure, and research trends that match with those discovered through a traditional systematic review of the nanoEHS field. Because KCN-based analyses can be conducted more quickly to explore a vast amount of literature, this method can provide a knowledge map and insights prior to undertaking a rigorous traditional systematic review. This two-step approach can significantly reduce the effort and time required for a traditional systematic literature review. The proposed KCN-based pre-systematic review method is universal. It can be applied to any scientific field of study to prepare a knowledge map.
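The KCN construction described above (keywords as nodes, per-article co-occurrence counts as link weights) can be sketched directly. A minimal Python version, with made-up example keywords:

```python
from collections import Counter
from itertools import combinations

def build_kcn(articles):
    """Build a keyword co-occurrence network from per-article keyword lists.

    Returns (node_strength, edge_weight): an edge's weight counts how many
    articles contain both keywords; a node's strength sums its edge weights.
    """
    edge_weight = Counter()
    for keywords in articles:
        # Each unordered keyword pair in one article adds 1 to its link weight.
        for a, b in combinations(sorted(set(keywords)), 2):
            edge_weight[(a, b)] += 1
    node_strength = Counter()
    for (a, b), w in edge_weight.items():
        node_strength[a] += w
        node_strength[b] += w
    return node_strength, edge_weight

articles = [
    ["nanoparticle", "toxicity", "risk"],
    ["nanoparticle", "toxicity"],
    ["risk", "exposure"],
]
strength, weights = build_kcn(articles)
```

The paper's contribution lies in the new statistical metrics computed over such a network, which this sketch does not attempt to reproduce.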
A novel framework for increasing research transparency: Exploring the connection between diversity and innovation
A split sample/dual method research protocol is demonstrated to increase transparency while reducing the probability of false discovery. We apply the protocol to examine whether diversity in ownership teams increases or decreases the likelihood of a firm reporting a novel innovation using data from the 2018 United States Census Bureau’s Annual Business Survey. Transparency is increased in three ways: 1) all specification testing and identifying potentially productive models is done in an exploratory subsample that 2) preserves the validity of hypothesis test statistics from de novo estimation in the holdout confirmatory sample with 3) all findings publicly documented in an earlier registered report and in this journal publication. Bayesian estimation procedures that leverage information from the exploratory stage included in the confirmatory stage estimation replace traditional frequentist null hypothesis significance testing. In addition to increasing statistical power by using information from the full sample, Bayesian methods directly estimate a probability distribution for the magnitude of an effect, allowing much richer inference. Estimated magnitudes of diversity along academic discipline, race, ethnicity, and foreign-born status dimensions are positively associated with innovation. A maximally diverse ownership team on these dimensions would be roughly six times more likely to report new-to-market innovation than a homophilic team.
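The first step of the split-sample protocol, partitioning the data into exploratory and confirmatory subsamples, can be sketched as follows (a generic sketch, not the authors' registered code; the fraction and seed are arbitrary):

```python
import random

def split_sample(records, exploratory_fraction=0.5, seed=0):
    """Randomly partition data into exploratory and confirmatory subsamples.

    Specification search happens only on the exploratory half; hypothesis
    tests are run once, de novo, on the held-out confirmatory half, which
    preserves the validity of the resulting test statistics.
    """
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    cut = int(len(shuffled) * exploratory_fraction)
    return shuffled[:cut], shuffled[cut:]

explore, confirm = split_sample(range(100))
```

The authors additionally replace frequentist testing in the confirmatory stage with Bayesian estimation that carries information forward from the exploratory stage, which this sketch does not cover.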
A parametric bootstrap control chart for Lindley Geometric percentiles
Control charts are vital for quality control and process monitoring, helping businesses identify variations in production. Traditional control charts, like Shewhart charts, may not work well for skewed distributions, such as the Lindley geometric distribution (LG). This study introduces a new control chart that uses parametric bootstrap techniques to monitor percentiles of the LG distribution, providing a more effective quality control method. The LG distribution is useful for modeling material strength and failures, especially in structural design, where lower percentiles indicate reduced tensile strength. We conducted extensive simulations to assess the proposed control chart’s effectiveness, considering various distribution parameters, percentile values, Type I error rates, and sample sizes. Our findings highlight how subgroup size, percentiles, and significance levels affect control limits, stressing the need for careful parameter selection in monitoring processes. The results show that the new control chart is highly sensitive to changes in LG distribution parameters and performs consistently across different percentiles. This suggests its practical relevance and robustness for industrial applications in quality control. Future research should explore its performance in real-world production settings to confirm its efficiency and reliability.
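The parametric bootstrap idea can be illustrated with a simplified sketch that substitutes an exponential model for the LG distribution (fitting the actual two-parameter Lindley geometric model would be more involved; all numbers here are invented):

```python
import random
import statistics

def bootstrap_percentile_limits(sample, p=0.1, n_boot=2000, alpha=0.0027, seed=1):
    """Parametric bootstrap control limits for the p-th percentile.

    Fit a parametric model to the in-control baseline sample (exponential
    here, as a stand-in for the LG distribution), resample repeatedly from
    the fitted model, and take quantiles of the bootstrapped percentile
    estimates as lower and upper control limits.
    """
    rng = random.Random(seed)
    mean = statistics.fmean(sample)  # MLE of the exponential mean
    n = len(sample)
    estimates = []
    for _ in range(n_boot):
        boot = sorted(rng.expovariate(1 / mean) for _ in range(n))
        estimates.append(boot[int(p * n)])  # crude p-th percentile estimate
    estimates.sort()
    lcl = estimates[int(alpha / 2 * n_boot)]
    ucl = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lcl, ucl

baseline = [0.5, 1.2, 0.8, 2.1, 0.3, 1.7, 0.9, 1.1, 0.6, 1.4]
lcl, ucl = bootstrap_percentile_limits(baseline)
```

A subgroup whose estimated percentile falls outside (lcl, ucl) would signal a shift, e.g. reduced tensile strength when a lower percentile drops below the lower limit.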
Innovating under different competitive strategies: The impact of R&D on risk and return in dynamic environments
The prevailing narrative in the management literature views R&D as a high-risk, high-return activity. Although firms with varying risk-return preferences pursue R&D, this conventional perspective continues to influence decision-making in both corporate strategy and economic policy. This paper questions the narrative by using a novel statistical framework that accounts for competitive strategy and environmental turbulence. Drawing on firm innovation data from the Community Innovation Survey (CIS), we apply semiparametric regression for location and scale to model both the mean and the variance of turnover growth as a function of the interaction between R&D intensity and environmental turbulence, across four common competitive strategy regimes. The findings reveal that for firms prioritizing price leadership across a broad product range, R&D is associated with reduced risk and minimal impact on average growth. Only for firms specifically focused on high quality or narrow product ranges do the results align with prior research, confirming the expected high-risk, high-return relationship associated with R&D.
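A crude two-step stand-in for regressing on both location and scale, fitting the mean first and then the log-variance via squared residuals, can be sketched in Python (toy data with one predictor; not the authors' semiparametric method):

```python
import math
import statistics

def simple_ols(x, y):
    """Slope and intercept of a one-predictor least-squares fit."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def location_scale_fit(rd, growth):
    """Two-step location-scale regression sketch.

    Step 1 models mean growth as a function of R&D intensity (location);
    step 2 models the log of squared residuals (a proxy for log-variance,
    i.e. risk) as a function of the same predictor (scale).
    """
    b1, b0 = simple_ols(rd, growth)
    residuals = [y - (b0 + b1 * x) for x, y in zip(rd, growth)]
    g1, g0 = simple_ols(rd, [math.log(r ** 2 + 1e-9) for r in residuals])
    return (b0, b1), (g0, g1)

# Toy data: growth rises roughly one-for-one with R&D intensity.
rd = [0.0, 1.0, 2.0, 3.0]
growth = [1.0, 2.1, 2.9, 4.0]
mean_params, scale_params = location_scale_fit(rd, growth)
```

The point of the two-equation structure is that R&D intensity can shift the variance (risk) of growth independently of its mean, which is exactly the distinction the paper exploits.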
Nexus among intellectual capital, interorganizational learning, industrial Internet of things technology and innovation performance: a resource-based perspective
Purpose: The authors observe the influence of intellectual capital (IC) on innovation performance, with the mediating role of interorganizational learning (IOL), in the Pakistani automotive industry. In addition, industrial Internet of things (IoT) technology is used as a moderating variable between IOL and innovation performance.
Design/methodology/approach: Structural equation modeling (SEM) offers scholars extra flexibility and enhanced research conclusions; it is a statistical methodology well suited to hypothesis testing. The authors used partial least squares SEM to test the hypotheses. A simple random sampling technique was followed to collect data from respondents, and 492 questionnaires were used for analysis.
Findings: The outcomes reveal that IC enhances innovation performance and IOL. Moreover, IOL increases innovation performance and significantly mediates between IC and innovation performance. Industrial IoT technology improves innovation performance and strengthens the positive association between IOL and innovation performance.
Practical implications: This study concentrates on how managers can use IOL and industrial IoT technology to take greater advantage of IC and thereby increase innovation performance.
Originality/value: This is the first study to build a theoretical framework integrating IC, IOL, industrial IoT technology and innovation performance. Although prior researchers have observed the association between IC and innovation performance, less attention has been paid to the role of interorganizational learning and industrial IoT technology in leveraging organizational IC.
Problems with the concept of gut microbiota dysbiosis
Summary: Human microbiome research is, with the notable exception of fecal transplantation, still mostly in a descriptive phase. Part of the difficulty in translating research into medical interventions is the large compositional complexity of the microbiome, which yields datasets that need sophisticated statistical methods for their analysis and do not lend themselves to industrial applications. Another part of the difficulty might be due to logical flaws in terminology, particularly concerning 'dysbiosis', which needs a definition that avoids circular conclusions and is based on sound ecological and evolutionary reasoning. Many case–control studies are underpowered, necessitating more meta‐analyses that sort out consistent from spurious dysbiosis–disease associations. We also need, for the microbiome, a transition from statistical associations to causal relationships with diseases that fulfil a set of modified Koch's postulates for commensals. Disturbingly, even the most sophisticated statistical analyses explain only a small percentage of the variance in the microbiome. Microbe–microbe interactions irrelevant to the host and stochastic processes might play a greater role than anticipated. To satisfy Karl Popper's concept of conjectures and refutations in the scientific process, we should also conduct more experiments that try to refute the role of the commensal gut microbiota in human health and disease. Microbiome research has, with the exception of fecal transplantation, not yet reached the level of industrial application with respect to gut microbiome engineering. Part of the difficulty might be the complexity of the system, but some problems might reflect imprecise definitions, such as that of dysbiosis. Refutations of concepts should be part of the scientific process.
A statistical insight to exploration of medicinal wastewater as a source of thermostable lipase-producing microorganisms
Bacteria are ubiquitous and capable of thriving in diverse environments, including industrial effluents, which often present harsh physical and chemical conditions. These microorganisms produce various intracellular and extracellular biomolecules that enable adaptation, tolerance, and utilization of such extreme environments. Recognizing the growing industrial demand for thermostable lipases, this study focuses on the isolation, characterization, and optimization of lipase-producing bacteria from medicinal wastewater collected from a factory in North 24 Parganas, Kolkata, West Bengal, India. Nineteen lipase-producing bacterial isolates were obtained from nutrient agar plates and screened using tributyrin agar (TBA) plates. Extracellular lipolytic activity was confirmed via the cup-plate method with Tween 20/80 agar and methyl red as the indicator. The isolates were characterized morphologically and through biochemical tests. Extracellular lipase activity was quantified spectrophotometrically using para-nitrophenyl palmitate (pNPP) as a substrate in 50 mM Tris-HCl buffer, with absorbance measured at 410 nm after incubation at 65°C for 20 minutes to assess thermostability. Of the 19 isolates, 11 produced thermolabile lipases, while 8 exhibited thermostable lipase activity. Among these, three isolates (MWS14, MWS6, and MWS18) demonstrated high thermostable lipase production, with MWS18 being the most productive. Ribotyping and BLAST analysis revealed that these isolates shared 99% sequence similarity with Enterococcus, Bacillus, and Serratia species, respectively. Statistical analysis using the Kruskal-Wallis H-test confirmed significant differences in lipase production among the three groups of isolates. The study also predicts greater lipase production potential in Gram-negative bacterial strains compared to Gram-positive isolates. These findings highlight the industrial relevance of medicinal wastewater as a source of thermostable lipase-producing bacteria.
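The Kruskal-Wallis H-test used in the analysis ranks all observations jointly and compares mean ranks across groups. A minimal Python sketch of the H statistic (the measurements below are invented, not the study's data):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic.

    Ranks all observations together (ties get average ranks; the tie
    correction factor is omitted in this sketch) and compares the rank
    sums across groups. Large H suggests the groups differ.
    """
    pooled = sorted(v for g in groups for v in g)
    rank = {}
    for v in set(pooled):
        first = pooled.index(v) + 1          # rank of the first occurrence
        count = pooled.count(v)
        rank[v] = first + (count - 1) / 2    # average rank over ties
    n = len(pooled)
    total = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * total - 3 * (n + 1)

# Toy lipase activities for three groups of isolates, fully separated.
h = kruskal_wallis_h([27.0, 29.0, 30.0], [20.0, 21.0, 22.0], [12.0, 11.0, 13.0])
```

The statistic would then be compared against a chi-squared distribution with (number of groups - 1) degrees of freedom to obtain a p-value.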