Search Results

4 results for "recommendation set size"
Web Personalization as a Persuasion Strategy: An Elaboration Likelihood Model Perspective
With advances in tracking and database technologies, firms are increasingly able to understand their customers and translate this understanding into products and services that appeal to them. Technologies such as collaborative filtering, data mining, and click-stream analysis enable firms to customize their offerings at the individual level. While there has been a lot of hype about web personalization recently, our understanding of its effectiveness is far from conclusive. Drawing on the elaboration likelihood model (ELM) literature, this research takes the view that the interaction between a firm and its customers is one of communicating a persuasive message to the customers driven by business objectives. In particular, we examine three major elements of a web personalization strategy: level of preference matching, recommendation set size, and sorting cue. These elements can be manipulated by a firm in implementing its personalization strategy. This research also investigates a personal disposition, need for cognition, which plays a role in assessing the effectiveness of web personalization. Research hypotheses are tested using 1,000 subjects in three field experiments based on a ring-tone download website. Our findings indicate the saliency of these variables in different stages of the persuasion process. Theoretical and practical implications of the findings are discussed.
Consequences of multiple imputation of missing standard deviations and sample sizes in meta‐analysis
Meta‐analyses often encounter studies with incompletely reported variance measures (e.g., standard deviation values) or sample sizes, both needed to conduct weighted meta‐analyses. Here, we first present a systematic literature survey on the frequency and treatment of missing data in published ecological meta‐analyses, showing that the majority of meta‐analyses encountered incompletely reported studies. We then simulated meta‐analysis data sets to investigate the performance of 14 options to treat or impute missing SDs and/or SSs. Performance was assessed against results from fully informed weighted analyses on (hypothetically) complete data sets. We show that the omission of incompletely reported studies is not a viable solution. Unweighted and sample size‐based variance approximation can yield unbiased grand means if effect sizes are independent of their corresponding SDs and SSs. The performance of different imputation methods depends on the structure of the meta‐analysis data set, especially in the case of correlated effect sizes and standard deviations or sample sizes. In a best‐case scenario, which assumes that SDs and/or SSs are missing at random and are unrelated to effect sizes, our simulations show that the imputation of up to 90% of missing data still yields grand means and confidence intervals similar to those obtained with fully informed weighted analyses. We conclude that multiple imputation of missing variance measures and sample sizes could help overcome the problem of incompletely reported primary studies, not only in the field of ecological meta‐analyses. Still, caution must be exercised in light of potential correlations and patterns of missingness.
We present a systematic literature survey on the frequency and treatment of missing data in published ecological meta‐analyses. Simulating the effect of 14 different options to treat missing data in meta‐analysis, we show that multiple imputation of missing variance measures and sample sizes could help overcome the problem of incompletely reported primary studies.
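The multiple-imputation idea summarized in this abstract can be sketched roughly as follows. This is an illustrative simplification, not the paper's actual procedure: missing standard deviations are filled by resampling from the observed SDs, a weighted grand mean is computed for each imputed data set, and the results are pooled across imputations. All study effects, SDs, and sample sizes below are invented.

```python
import random
import statistics

random.seed(0)

# Hypothetical studies: effect sizes, standard deviations (None = unreported), sample sizes.
effects = [0.5, 0.8, 0.3, 0.6, 0.4]
sds     = [0.2, None, 0.25, None, 0.3]
ns      = [30, 25, 40, 20, 35]

observed_sds = [s for s in sds if s is not None]

grand_means = []
for _ in range(100):  # 100 imputed data sets
    # Impute each missing SD by drawing from the observed SDs.
    filled = [s if s is not None else random.choice(observed_sds) for s in sds]
    # Inverse-variance-style weights: variance of a study mean scales as sd^2 / n.
    weights = [n / s**2 for n, s in zip(ns, filled)]
    grand_means.append(sum(w * e for w, e in zip(weights, effects)) / sum(weights))

# Pool the grand means across imputations.
pooled = statistics.mean(grand_means)
```

Because each imputed grand mean is a positively weighted average of the study effects, the pooled estimate necessarily lies between the smallest and largest effect size.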
Capitalizing on the demographic transition: tackling noncommunicable diseases in South Asia
This book looks primarily at cardiovascular disease (CVD) and tobacco use because they account for a disproportionate share of the noncommunicable disease (NCD) burden; the focus is strategic rather than comprehensive. It considers both country-level and regional approaches to tackling NCDs, since many of the issues and challenges of mounting an effective response are common to most South Asian countries, even though their disease burden profiles vary. The prevention and control of NCDs constitute a development issue that low-income countries in South Asia are already facing. The rationale for this book is that strategic decisions on the prevention and treatment of NCDs can effectively address the future burden of disease, promote healthy aging, and increase the potential benefit from the demographic transition, thus contributing to economic development. Its goal is to encourage countries to develop, adopt, and implement effective and timely country and regional responses that reduce population-level risk factors and the NCD burden.
Optimal Selection of Ingot Sizes Via Set Covering
In 1984, Bethlehem Steel Corporation installed a new ingot mold stripping facility at its Bethlehem Plant that is capable of handling taller ingots. In order to take maximum advantage of this new facility, we developed a two-phase, computer-based procedure for selecting optimal ingot and internal ingot mold dimensions. Phase I of this procedure generates feasible ingot and internal ingot mold dimensions consistent with both the new stripper's capability and with mill constraints. Phase II then uses a set covering approach to select the optimal ingot and internal ingot mold sizes from among the feasible sizes generated. After analyzing the model, we recommended six new rectangular mold sizes to replace seven existing sizes. To date, implementation of these new ingot and mold sizes is proceeding smoothly and realizing the projected cost reduction benefits.
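The set covering idea behind Phase II can be illustrated with a simple greedy heuristic: repeatedly pick the candidate mold size that covers the most still-uncovered ingot requirements. Note this is only a sketch; the mold names and coverage sets below are hypothetical, and an exact set covering model (as the abstract describes) would typically be solved with integer programming rather than greedily.

```python
def greedy_set_cover(universe, candidates):
    """Greedy set cover: universe is a set of items to cover;
    candidates maps a candidate name to the set of items it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate covering the most still-uncovered items.
        best = max(candidates, key=lambda name: len(candidates[name] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("universe cannot be covered by the given candidates")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical data: which ingot requirement classes each mold size can serve.
orders = {"o1", "o2", "o3", "o4", "o5"}
molds = {
    "mold_A": {"o1", "o2"},
    "mold_B": {"o2", "o3", "o4"},
    "mold_C": {"o4", "o5"},
    "mold_D": {"o1", "o5"},
}
selection = greedy_set_cover(orders, molds)  # two molds suffice here
```

Here the greedy pass first selects `mold_B` (covers three requirements), then `mold_D` (covers the remaining two), mirroring how the paper's model replaced seven existing sizes with six.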