Search Results

306 results for "Economics Simulation methods Computer programs."
Simulating distributional impacts of macro-dynamics : theory and practical applications
\"Simulating Distributional Impacts of Macro-dynamics: Theory and Practical Applications is a comprehensive guide for analyzing and understanding the effects of macroeconomic shocks on income and consumption distribution, as well as for using the ADePT Simulation Module. Since real-time micro data is rarely available, the Simulation Module (part of the ADePT economic analysis software) takes advantage of historical household surveys to estimate how current or proposed macro changes might impact household and individual welfare\"--Back cover.
Simulating distributional impacts of macro-dynamics
The automated DEC poverty tables (ADePT) simulation module, one of several modules in the ADePT platform, offers a useful methodological framework for analysts interested in measuring how macroeconomic projections may affect households. The module's approach falls between simple extrapolation and the most sophisticated methods such as top-down or top-down/bottom-up models based on linking household data with computable general equilibrium (CGE) models. By using simple macroeconomic projections as the macro-linkages to a micro-behavioral model built from household data, the model captures the complexities that influence how macro impacts are transmitted to households. The ADePT simulation module is an improvement over existing approaches because with minimal data and computational requirements it can evaluate in advance the distributional impacts of macroeconomic projections. By focusing on adjustments in employment and earnings, non-labor income, and price changes, it accounts for multiple transmission mechanisms and captures micro-level impacts across the entire income distribution. Using existing macroeconomic data and household surveys, the ADePT simulation module helps in identifying and profiling those groups of individuals - defined by characteristics such as occupational sector, location, and education level - who are most likely to suffer income losses as a consequence of the change. This manual is organized in two parts. Part one covers the motivation, overview, and illustrations of the method. Part two describes each step the user must follow to create or obtain the proper macro- and microeconomic inputs required for the simulation. It also explains how to enter these inputs into the module and the different options available for tailoring simulations.
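To make the macro-to-micro transmission idea concrete, here is a minimal Python sketch of the general approach: scale survey-based household earnings by projected sectoral growth, deflate by a price change, and recompute a distributional statistic. It is not the ADePT implementation; the survey fields, growth figures, inflation rate, and poverty line are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical household survey: sector of employment, labor and non-labor income.
sector = rng.choice(["agriculture", "industry", "services"], size=n)
labor_income = rng.lognormal(mean=8.0, sigma=0.7, size=n)
nonlabor_income = rng.lognormal(mean=6.5, sigma=0.9, size=n)

# Hypothetical macro projections: sectoral earnings growth and overall inflation.
earnings_growth = {"agriculture": -0.05, "industry": 0.02, "services": 0.04}
inflation = 0.08

# Transmit the macro scenario to each household: scale labor earnings by the
# household's sectoral growth rate, then deflate total income by the price change.
growth = np.array([earnings_growth[s] for s in sector])
old_income_real = labor_income + nonlabor_income
new_income_real = (labor_income * (1 + growth) + nonlabor_income) / (1 + inflation)

# Compare a simple distributional statistic before and after the projected shock.
poverty_line = 2_500.0
print("poverty rate before:", (old_income_real < poverty_line).mean())
print("poverty rate after: ", (new_income_real < poverty_line).mean())
```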
How to treat uncertainties in life cycle assessment studies?
Purpose: The use of life cycle assessment (LCA) as a decision support tool can be hampered by the numerous uncertainties embedded in the calculation. The treatment of uncertainty is necessary to increase the reliability and credibility of LCA results. The objective is to provide an overview of the methods to identify, characterize, propagate (uncertainty analysis), understand the effects (sensitivity analysis), and communicate uncertainty in order to propose recommendations to a broad public of LCA practitioners. Methods: This work was carried out via a literature review and an analysis of LCA tool functionalities. In order to facilitate the identification of uncertainty, its location within an LCA model was distinguished between quantity (any numerical data), model structure (relationships structure), and context (criteria chosen within the goal and scope of the study). The methods for uncertainty characterization, uncertainty analysis, and sensitivity analysis were classified according to the information provided, their implementation in LCA software, the time and effort required to apply them, and their reliability and validity. This review led to the definition of recommendations on three levels: basic (low efforts with LCA software), intermediate (significant efforts with LCA software), and advanced (significant efforts with non-LCA software). Results and discussion: For the basic recommendations, minimum and maximum values (quantity uncertainty) and alternative scenarios (model structure/context uncertainty) are defined for critical elements in order to estimate the range of results. Result sensitivity is analyzed via one-at-a-time variations (with realistic ranges of quantities) and scenario analyses. Uncertainty should be discussed at least qualitatively in a dedicated paragraph. For the intermediate level, the characterization can be refined with probability distributions and an expert review for scenario definition. Uncertainty analysis can then be performed with the Monte Carlo method for the different scenarios. Quantitative information should appear in inventory tables and result figures. Finally, advanced practitioners can screen uncertainty sources more exhaustively, include correlations, estimate model error with validation data, and perform Latin hypercube sampling and global sensitivity analysis. Conclusions: Through this pedagogic review of the methods and practical recommendations, the authors aim to increase the knowledge of LCA practitioners related to uncertainty and facilitate the application of treatment techniques. To continue in this direction, further research questions should be investigated (e.g., on the implementation of fuzzy logic and model uncertainty characterization) and the developers of databases, LCIA methods, and software tools should invest efforts in better implementing and treating uncertainty in LCA.
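As a rough illustration of the basic and intermediate recommendations (min/max ranges, one-at-a-time variation, and Monte Carlo propagation of probability distributions), here is a minimal sketch on a made-up two-parameter impact model; all parameter names, ranges, and distributions are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy impact model (illustrative only): impact = electricity use * grid emission factor.
def impact(electricity_kwh, emission_factor):
    return electricity_kwh * emission_factor

# Basic level: min/max ranges for the critical quantities give a range of results.
elec_lo, elec_hi = 90.0, 110.0   # kWh per functional unit (assumed range)
ef_lo, ef_hi = 0.35, 0.55        # kg CO2e per kWh (assumed range)
print("result range:", impact(elec_lo, ef_lo), "to", impact(elec_hi, ef_hi))

# Basic level: one-at-a-time sensitivity, varying one quantity over its range
# while holding the other at a central value.
print("OAT electricity:     ", impact(elec_lo, 0.45), impact(elec_hi, 0.45))
print("OAT emission factor: ", impact(100.0, ef_lo), impact(100.0, ef_hi))

# Intermediate level: probability distributions propagated with Monte Carlo sampling.
samples = impact(rng.uniform(elec_lo, elec_hi, size=10_000),
                 rng.normal(loc=0.45, scale=0.05, size=10_000))
print("Monte Carlo mean:", samples.mean())
print("95% interval:", np.percentile(samples, [2.5, 97.5]))
```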
Comparing implementations of global and local indicators of spatial association
Functions to calculate measures of spatial association, especially measures of spatial autocorrelation, have been made available in many software applications. Measures may be global, applying to the whole data set under consideration, or local, applying to each observation in the data set. Methods of statistical inference may also be provided, but these will, like the measures themselves, depend on the support of the observations, chosen assumptions, and the way in which spatial association is represented; spatial weights are often used as a representational technique. In addition, assumptions may be made about the underlying mean model, and about error distributions. Different software implementations may choose to expose these choices to the analyst, but the sets of choices available may vary between these implementations, as may default settings. This comparison will consider the implementations of global Moran’s I, Getis–Ord G and Geary's C, local $I_i$ and $G_i$, available in a range of software including Crimestat, GeoDa, ArcGIS, PySAL and R contributed packages.
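For reference, global Moran's I is the statistic these implementations share; a plain NumPy sketch with a hypothetical four-area dataset and binary contiguity weights shows the quantity being compared. The compared packages wrap the same statistic but differ in choices such as weights standardisation and in how inference is performed (analytical vs. permutation-based), which is the focus of the comparison above.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and a dense spatial weights matrix w."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()          # deviations from the mean
    s0 = w.sum()              # sum of all spatial weights
    return (n / s0) * (z @ w @ z) / (z ** 2).sum()

# Hypothetical 4-area example with binary contiguity weights along a line.
values = [3.0, 2.8, 7.1, 6.9]
weights = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]])
print("Moran's I:", morans_i(values, weights))
```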
1001 Ways to run AutoDock Vina for virtual screening
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
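A minimal sketch of the first point above (an extra layer of parallelization on a multi-core machine) and of seed capture: several single-core Vina processes run concurrently from Python. The receptor, ligand directory, and box config are placeholders; the command-line options shown (--config, --receptor, --ligand, --out, --seed, --exhaustiveness, --cpu) follow the standard AutoDock Vina CLI, but verify them against your installed version.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

RECEPTOR = "receptor.pdbqt"    # placeholder receptor file
BOX_CONFIG = "box.txt"         # placeholder config holding the search-box definition
LIGAND_DIR = Path("ligands")   # placeholder directory of *.pdbqt ligands
SEED = 42                      # fixed seed: necessary (though not sufficient) for reproducibility

def dock_one(ligand: Path) -> str:
    """Run one single-core Vina job for a single ligand."""
    out = ligand.with_suffix(".docked.pdbqt")
    subprocess.run(
        ["vina", "--config", BOX_CONFIG,
         "--receptor", RECEPTOR, "--ligand", str(ligand), "--out", str(out),
         "--seed", str(SEED), "--exhaustiveness", "8", "--cpu", "1"],
        check=True, capture_output=True)
    return out.name

if __name__ == "__main__":
    ligands = sorted(LIGAND_DIR.glob("*.pdbqt"))
    # The extra parallel layer: many one-core Vina processes instead of one multi-core run.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for name in pool.map(dock_one, ligands):
            print("finished", name)
```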
Cost-effectiveness of financial incentives for improving diet and health through Medicare and Medicaid: A microsimulation study
Economic incentives through health insurance may promote healthier behaviors. Little is known about health and economic impacts of incentivizing diet, a leading risk factor for diabetes and cardiovascular disease (CVD), through Medicare and Medicaid. A validated microsimulation model (CVD-PREDICT) estimated CVD and diabetes cases prevented, quality-adjusted life years (QALYs), health-related costs (formal healthcare, informal healthcare, and lost-productivity costs), and incremental cost-effectiveness ratios (ICERs) of two policy scenarios for adults within Medicare and Medicaid, compared to a base case of no new intervention: (1) 30% subsidy on fruits and vegetables ("F&V incentive") and (2) 30% subsidy on broader healthful foods including F&V, whole grains, nuts/seeds, seafood, and plant oils ("healthy food incentive"). Inputs included national demographic and dietary data from the National Health and Nutrition Examination Survey (NHANES) 2009-2014, policy effects and diet-disease effects from meta-analyses, and policy and health-related costs from established sources. Overall, 82 million adults (35-80 years old) were on Medicare and/or Medicaid. The mean (SD) age was 68.1 (11.4) years, 56.2% were female, and 25.5% were non-whites. Health and cost impacts were simulated over the lifetime of current Medicare and Medicaid participants (average simulated years = 18.3 years). The F&V incentive was estimated to prevent 1.93 million CVD events, gain 4.64 million QALYs, and save $39.7 billion in formal healthcare costs. For the healthy food incentive, corresponding gains were 3.28 million CVD and 0.12 million diabetes cases prevented, 8.40 million QALYs gained, and $100.2 billion in formal healthcare costs saved, respectively. From a healthcare perspective, both scenarios were cost-effective at 5 years and beyond, with lifetime ICERs of $18,184/QALY (F&V incentive) and $13,194/QALY (healthy food incentive). From a societal perspective including informal healthcare costs and lost productivity, respective ICERs were $14,576/QALY and $9,497/QALY. Results were robust in probabilistic sensitivity analyses and a range of one-way sensitivity and subgroup analyses, including by different durations of the intervention (5, 10, and 20 years and lifetime), food subsidy levels (20%, 50%), insurance groups (Medicare, Medicaid, and dual-eligible), and beneficiary characteristics within each insurance group (age, race/ethnicity, education, income, and Supplemental Nutrition Assistance Program [SNAP] status). Simulation studies such as this one provide quantitative estimates of benefits and uncertainty but cannot directly prove health and economic impacts. Economic incentives for healthier foods through Medicare and Medicaid could generate substantial health gains and be highly cost-effective.
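The headline $/QALY figures are incremental cost-effectiveness ratios; a minimal sketch of the arithmetic with made-up totals (not the CVD-PREDICT inputs or outputs):

```python
def icer(cost_new, cost_base, qaly_new, qaly_base):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

# Hypothetical lifetime totals for a simulated cohort (illustrative numbers only).
cost_policy, cost_status_quo = 120.0e9, 36.0e9   # policy costs (incl. subsidies) vs. base case
qaly_policy, qaly_status_quo = 504.6e6, 500.0e6  # QALYs under the policy vs. base case

ratio = icer(cost_policy, cost_status_quo, qaly_policy, qaly_status_quo)
print(f"ICER: ${ratio:,.0f} per QALY gained")
# A negative ratio with more QALYs gained would mean the policy is cost-saving (dominant).
```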
OpenCASA: A new open-source and scalable tool for sperm quality analysis
In the field of assisted reproductive techniques (ART), computer-assisted sperm analysis (CASA) systems have proved their utility and potential for assessing sperm quality, improving the prediction of the fertility potential of a seminal dose. Although most laboratories and scientific centers use commercial systems, in recent years certain free and open-source alternatives have emerged that can reduce the costs that research groups have to face. However, these open-source alternatives cannot analyze sperm kinetic responses to different stimuli, such as chemotaxis, thermotaxis or rheotaxis. In addition, the programs released to date have not usually been designed to encourage the scalability and continuity of software development. We have developed an open-source CASA software, called OpenCASA, which allows users to study three classical sperm quality parameters: motility, morphometry and membrane integrity (viability), and offers the possibility of analyzing the guided movement response of spermatozoa to different stimuli (useful for chemotaxis, thermotaxis or rheotaxis studies) or different motile cells such as bacteria, using a single software. This software has been released in a Version Control System at GitHub. This platform will allow researchers not only to download the software but also to be involved in and contribute to further developments. Additionally, a Google group has been created to allow the research community to interact and discuss OpenCASA. For validation of the OpenCASA software, we analyzed different simulated sperm populations (for the chemotaxis module) and evaluated 36 ejaculates obtained from 12 fertile rams using other sperm analysis systems (for the motility, membrane integrity and morphology modules). The results were compared with those obtained by OpenCASA using Pearson's correlation and Bland-Altman tests, obtaining a high level of correlation in all parameters and good agreement between the different methods used and OpenCASA. With this work, we propose an open-source project oriented to the development of a new software application for sperm quality analysis. This proposed software will use a minimally centralized infrastructure to allow the continued development of its modules by the research community.
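The validation step described (Pearson correlation plus Bland-Altman limits of agreement between paired measurements from two systems) can be sketched roughly as follows; the paired motility data here are simulated, not the published ram measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated paired motility measurements (%) from a reference system and a new tool.
reference = rng.uniform(40, 90, size=36)
new_tool = reference + rng.normal(0.5, 2.0, size=36)   # small bias plus measurement noise

# Pearson correlation between the two systems.
r, p = stats.pearsonr(reference, new_tool)
print(f"Pearson r = {r:.3f} (p = {p:.2g})")

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement.
diff = new_tool - reference
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}]")
```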
Population Health Impact and Cost-Effectiveness of Community-Supported Agriculture Among Low-Income US Adults: A Microsimulation Analysis
Objectives. To estimate the population-level effectiveness and cost-effectiveness of a subsidized community-supported agriculture (CSA) intervention in the United States. Methods. In 2019, we developed a microsimulation model from nationally representative demographic, biomedical, and dietary data (National Health and Nutrition Examination Survey, 2013–2016) and a community-based randomized trial (conducted in Massachusetts from 2017 to 2018). We modeled 2 interventions: unconditional cash transfer ($300/year) and subsidized CSA ($300/year subsidy). Results. The total discounted disability-adjusted life years (DALYs) accumulated over the life course due to cardiovascular disease and diabetes complications would be reduced from 24 797 per 10 000 people (95% confidence interval [CI] = 24 584, 25 001) at baseline to 23 463 per 10 000 (95% CI = 23 241, 23 666) under the cash intervention and 22 304 per 10 000 (95% CI = 22 084, 22 510) under the CSA intervention. From a societal perspective and over a life-course time horizon, the interventions had negative incremental cost-effectiveness ratios, implying cost savings to society of –$191 100 per DALY averted (95% CI = –$191 767, –$188 919) for the cash intervention and –$93 182 per DALY averted (95% CI = –$93 707, –$92 503) for the CSA intervention. Conclusions. Both the cash transfer and subsidized CSA may be important public health interventions for low-income persons in the United States.
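The "total discounted DALYs" metric is a present-value sum of yearly DALY burdens; a minimal sketch with hypothetical yearly figures and a conventional 3% discount rate (the study's actual rate, horizon, and inputs may differ):

```python
def discounted_dalys(yearly_dalys, rate=0.03):
    """Present value of a stream of yearly DALYs at a fixed annual discount rate."""
    return sum(d / (1 + rate) ** t for t, d in enumerate(yearly_dalys))

# Hypothetical burden per 10 000 people over a short horizon (illustrative only).
baseline_dalys = [520, 530, 545, 560, 575]
csa_dalys = [470, 478, 490, 500, 512]

averted = discounted_dalys(baseline_dalys) - discounted_dalys(csa_dalys)
print(f"discounted DALYs averted per 10 000: {averted:.0f}")
```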
Strategic decision making in live streaming e-commerce through tripartite evolutionary game analysis
This article delves into the current popular phenomenon of live streaming e-commerce, with a specific focus on issues related to product quality and after-sales service. It constructs an evolutionary game model that encompasses three key stakeholders: e-commerce platforms, consumers, and streamers. The study conducts a thorough analysis of the interactions and strategic choices among these entities, investigating the stability of equilibrium strategy combinations within the game system and the influence of various factors on decision-making behaviors. Furthermore, the validity of the analytical conclusion is corroborated through the application of simulation analysis methods. The study finds that for consumers, strategies such as reducing losses encountered due to quality issues under strict demands, enhancing compensation in these scenarios, and increasing benefits for maintaining stringent requirements during live streaming sessions can motivate them to adopt more stringent strategies. For the streamer, essential factors in promoting the selection of high-quality products include increasing the benefits associated with such choices and reducing the probability of quality issues, or alternatively, decreasing the gains from lower-quality selections and increasing the likelihood of encountering quality problems with these products. For the e-commerce platform, strategically adjusting the profit-sharing ratio to maintain collaborative momentum and influence the enthusiasm of both consumers and streamers is a critical strategy to avert market scenarios akin to prisoner’s dilemmas and tragic outcomes. Overall, this research offers profound insights into the complex strategic evolution within the live commerce market, providing valuable guidance for interaction strategies among e-commerce platforms, consumers, and streamers. Its implications for practical decision-making optimization and strategic formulation are of significant importance.
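Equilibrium analyses of this kind are typically checked numerically with replicator dynamics; a generic tripartite sketch follows, using placeholder linear payoff differences rather than the paper's payoff matrices.

```python
# State variables: x = share of platforms choosing strict oversight, y = share of
# consumers making strict demands, z = share of streamers selecting high-quality products.
# The linear payoff differences below are placeholders, not the paper's model.

def payoff_diffs(x, y, z):
    d_platform = 0.6 * y + 0.4 * z - 0.5   # strict oversight vs. lax oversight
    d_consumer = 0.7 * z + 0.2 * x - 0.4   # strict demands vs. lenient demands
    d_streamer = 0.5 * x + 0.5 * y - 0.6   # high-quality vs. low-quality selection
    return d_platform, d_consumer, d_streamer

def simulate(x0, y0, z0, dt=0.01, steps=5_000):
    """Euler integration of the tripartite replicator equations dS/dt = S(1 - S) * payoff_diff."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        dp, dc, ds = payoff_diffs(x, y, z)
        x += dt * x * (1 - x) * dp
        y += dt * y * (1 - y) * dc
        z += dt * z * (1 - z) * ds
    return x, y, z

print("long-run strategy shares:", simulate(0.3, 0.5, 0.4))
```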
Microfounded Tax Revenue Forecast Model with Heterogeneous Population and Genetic Algorithm Approach
The ability of governments to accurately forecast tax revenues is essential for the successful implementation of fiscal programs. However, forecasting state government tax revenues using only aggregate economic variables is subject to Lucas’s critique, which remains not fully answered, as classical methods do not consider the complex feedback dynamics between heterogeneous consumers, businesses, and the government. In this study we present an agent-based model with a heterogeneous population and genetic algorithm-based decision-making to model and simulate an economy with taxation policy dynamics. The model focuses on assessing state tax revenues obtained from regions or cities within countries while introducing consumers and businesses, each with unique attributes and a decision-making mechanism driven by an adaptive genetic algorithm. We demonstrate the efficacy of the proposed method on a small village, resulting in a mean relative error of 5.44% ± 2.45% from the recorded taxes over 4 years and 4.08% ± 1.21% for the following year’s assessment. Moreover, we demonstrate the model’s ability to evaluate the effect of different taxation policies on economic activity and tax revenues.
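The adaptive genetic algorithm driving agent decisions follows the usual selection, crossover, and mutation loop; a generic minimal sketch is shown below (the fitness function and parameters are placeholders, not the paper's agent model).

```python
import random

random.seed(0)

def fitness(genome):
    # Placeholder objective: genomes whose values sum close to a target are fitter.
    return -abs(sum(genome) - 10.0)

def evolve(pop_size=50, genome_len=8, generations=200, mutation_rate=0.1):
    population = [[random.uniform(0.0, 3.0) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation refill the population for the next generation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(genome_len)] += random.gauss(0.0, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best genome:", [round(g, 2) for g in best], "fitness:", round(fitness(best), 3))
```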