41,109 results for "Forecasting - methods"
Prediction machines : the simple economics of artificial intelligence
The idea of artificial intelligence--job-killing robots, self-driving cars, and self-managing organizations--captures the imagination, evoking a combination of wonder and dread for those of us who will have to deal with the consequences. But what if it's not quite so complicated? The real job of artificial intelligence, argue these three eminent economists, is to lower the cost of prediction. And once you start talking about costs, you can use some well-established economics to cut through the hype. The constant challenge for all managers is to make decisions under uncertainty. And AI contributes by making knowing what's coming in the future cheaper and more certain. But decision making has another component: judgment, which is firmly in the realm of humans, not machines. Making prediction cheaper means that we can make more predictions more accurately and assess them with our better (human) judgment. Once managers can separate tasks into components of prediction and judgment, we can begin to understand how to optimize the interface between humans and machines. More than just an account of AI's powerful capabilities, Prediction Machines shows managers how they can most effectively leverage AI, disrupting business as usual only where required, and provides businesses with a toolkit to navigate the coming wave of challenges and opportunities.-- Provided by publisher
Agricultural product price forecasting methods: research advances and trend
Purpose: This paper provides a reference for researchers by reviewing recent research advances and trends in agricultural product price forecasting methods.
Design/methodology/approach: The paper reviews the main forecasting methods for agricultural product prices and their applications, summarizes application examples of common forecasting methods, and outlines future research directions.
Findings: 1) Hybrid models are the trend for predicting agricultural product prices in future research; 2) the application of prediction models based on price-influencing factors should be further expanded; 3) model performance should be evaluated using DS (directional symmetry) rather than error-based metrics alone; 4) seasonal adjustment models can be applied to difficult seasonal forecasting tasks for agricultural product prices; 5) hybrid optimization algorithms can be used to improve the prediction performance of models.
Originality/value: The methods reviewed here can serve as a reference for researchers, and the research trends identified at the end of the paper suggest solutions and new research directions for related work.
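The distinction the authors draw between error-based metrics and DS can be made concrete. Below is a minimal sketch, with made-up prices, that scores the same forecast with MAPE and with directional symmetry (the share of periods where the forecast calls the direction of the price move correctly); the data and function names are illustrative, not from the paper.

```python
# Comparing an error-based metric (MAPE) with directional symmetry (DS)
# for a price forecast. The values are made-up monthly prices.
actual   = [10.0, 10.5, 10.2, 11.0, 11.4, 11.1]
forecast = [10.1, 10.3, 10.6, 10.8, 11.5, 11.3]

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def directional_symmetry(actual, forecast):
    """Share of periods where the forecast moves in the same direction
    as the actual series, relative to the previous actual value."""
    hits = sum(
        (actual[t] - actual[t - 1]) * (forecast[t] - actual[t - 1]) > 0
        for t in range(1, len(actual))
    )
    return hits / (len(actual) - 1)

print(f"MAPE = {mape(actual, forecast):.2f}%")
print(f"DS   = {directional_symmetry(actual, forecast):.2f}")
```

Here the percentage error is small, yet the forecast misses one of the five directional moves, which is exactly the kind of information an error metric alone hides.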
Everything is predictable : how Bayes' remarkable theorem explains the world
Thomas Bayes was an eighteenth-century Presbyterian minister and amateur mathematician whose obscure life belied the profound impact of his work. Like most research into probability at the time, his theorem was mainly seen as relevant to games of chance, like dice and cards. But its implications soon became clear, affecting fields as diverse as medicine, law and artificial intelligence. Bayes' theorem helps explain why highly accurate screening tests can lead to false positives, causing unnecessary anxiety for patients. A failure to account for it in court has put innocent people in jail. But its influence goes far beyond practical applications. Fusing biography, razor-sharp science communication and intellectual history, 'Everything Is Predictable' is a captivating tour of Bayes' theorem and its impact on modern life.
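The screening-test point is a direct consequence of Bayes' theorem. A minimal worked example, with illustrative numbers (1% prevalence, 99% sensitivity, 95% specificity) that are assumptions rather than figures from the book:

```python
# Bayes' theorem applied to a screening test (illustrative numbers).
prevalence = 0.01      # P(disease)
sensitivity = 0.99     # P(positive | disease)
specificity = 0.95     # P(negative | no disease)

# P(positive), by the law of total probability
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# P(disease | positive), by Bayes' theorem
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")
```

With these numbers only one positive result in six is a true case, which is why even highly accurate tests for rare conditions produce mostly false positives.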
“Sometimes I’m interested in seeing a fuller story to tell with numbers”: Implementing a forecasting dashboard for harm reduction and overdose prevention: a qualitative assessment
Objectives: The escalating overdose crisis in the United States points to the urgent need for new and novel data tools. Overdose data tools are growing in popularity but still face delays in surveillance data availability, lack of completeness, and wide variability in quality by region. As such, we need innovative tools to identify and prioritize emerging and high-need areas. Forecasting offers one such solution. Machine learning methods leverage numerous datasets that could be used to predict future vulnerability to overdose at the regional, town, and even neighborhood levels. This study aimed to understand the multi-level factors affecting the early stages of implementation of an overdose forecasting dashboard, developed with and for statewide harm reduction providers to increase data-driven response and resource distribution at the neighborhood level.
Methods: As part of PROVIDENT (Preventing OVerdose using Information and Data from the EnvironmeNT), a randomized, statewide community trial, we conducted an implementation study in which we facilitated three focus groups with harm reduction organizations enrolled in the larger trial. Focus group participants held titles such as peer outreach worker, case manager, and program coordinator/manager. We employed the Exploration, Preparation, Implementation, Sustainment (EPIS) framework to guide our analysis. This framework offers a multi-level, four-phase analysis unique to implementation within a human services environment, used here to assess the exploration and preparation phases that influenced the early launch of the intervention.
Results: Multiple themes centering on organizational culture and resources emerged, including limited staff capacity for new interventions and repeated exposure to stress and trauma, which could limit intervention uptake. Community-level themes included the burden of data collection for program funding and statewide efforts to build stronger networks for data collection, dashboarding, and data-driven resource allocation.
Discussion: Using an implementation framework within the larger study allowed us to identify multi-level and contextual factors affecting the early implementation of a forecasting dashboard within the PROVIDENT community trial. Additional investments to build organizational and community capacity may be required to create the optimal implementation setting and to integrate forecasting tools.
Future wise : educating our children for a changing world
"How to teach big understandings and the ideas that matter most. Everyone has an opinion about education, and teachers face pressures from Common Core content standards, high-stakes testing, and countless other directions. But how do we know what today's learners will really need to know in the future? Future Wise: Educating Our Children for a Changing World is a toolkit for approaching that question with new insight. There is no one answer to the question of what's worth teaching, but with the tools in this book, you'll be one step closer to constructing a curriculum that prepares students for whatever situations they might face in the future. K-12 teachers and administrators play a crucial role in building a thriving society. David Perkins, founding member and co-director of Project Zero at Harvard's Graduate School of Education, argues that curriculum is one of the most important elements of making students ready for the world of tomorrow. In Future Wise, you'll learn concepts, curriculum criteria, and techniques for prioritizing content so you can guide students toward the big understandings that matter:
  • Understand how learners use knowledge in life after graduation
  • Learn strategies for teaching critical thinking and addressing big questions
  • Identify top priorities when it comes to disciplines and content areas
  • Gain curriculum design skills that make the most of learning across the years of education
Future Wise presents a brand-new framework for thinking about education. Curriculum can be one of the hardest things for teachers and administrators to change, but David Perkins shows that only by reimagining what we teach can we lead students down the road to functional knowledge. Future Wise is the practical guidebook you need to embark on this important quest." -- Provided by publisher.
A Review of Auto-Regressive Methods Applications to Short-Term Demand Forecasting in Power Systems
The paper conducts a literature review of applications of autoregressive methods to short-term forecasting of power demand. This need is dictated by the advancement of modern forecasting methods and, in particular, the good forecasting efficiency they achieve. The annual effectiveness of day-ahead power demand forecasting for the Polish National Power Grid is approx. 1%; therefore, the main objective of the review is to verify whether it is possible to improve efficiency while keeping financial outlay and effort to a minimum. Autoregressive methods fulfil these conditions; the paper therefore focuses on them, as they are less time-consuming and, as a result, cheaper to develop and apply. The review ranks the forecasting models by the forecasting effectiveness reported in the literature, which enables the selection of models that may improve the effectiveness currently achieved by the transmission system operator. This approach yields a transparent set of forecasting methods and models, together with knowledge of their potential for short-term forecasting of electricity demand in the national power system. Articles in which the MAPE error was used to assess the quality of short-term forecasts were analyzed. The investigation covered 47 articles, several dozen forecasting methods, and 264 forecasting models. The articles date from 1997 onward and, apart from autoregressive methods, also include methods and models that use explanatory variables (non-autoregressive ones). The input data come from the period 1998–2014. The analysis covered 25 power systems on four continents (Asia, Europe, North America, and Australia), published by 44 different research teams. The results of the review show that autoregressive methods applied to short-term power demand forecasting have the potential to improve forecasting effectiveness in power systems. Based on the review, the most promising prognostic models using the autoregressive approach include Fuzzy Logic, Artificial Neural Networks, Wavelet Artificial Neural Networks, Adaptive Neuro-Fuzzy Inference Systems, Genetic Algorithms, Fuzzy Regression, and Data Envelopment Analysis. These methods make it possible to forecast short-term electricity demand with hourly resolution at error levels below 1%, which confirms the authors' assumption about the potential of autoregressive methods. Other forecasting models with high effectiveness may also prove useful to electricity system operators. The paper also discusses classical methods of Artificial Intelligence, Data Mining, and Big Data, and the state of research in short-term power demand forecasting in power systems using autoregressive and non-autoregressive methods and models.
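As a sketch of the kind of autoregressive baseline the review surveys, the snippet below fits a one-coefficient seasonal AR model (each hour predicted from the same hour the previous day) to synthetic hourly load and scores it with MAPE. The data generation and the single-lag model are illustrative assumptions, far simpler than the reviewed methods.

```python
# Minimal autoregressive baseline for short-term load forecasting:
# y[t] = a * y[t-24], with the single coefficient fitted by least squares.
# The load series below is synthetic and purely illustrative.
import math

load = [100 + 20 * math.sin(2 * math.pi * h / 24) + (h % 5) for h in range(72)]

LAG = 24
train = [(load[t - LAG], load[t]) for t in range(LAG, 48)]  # fit on day 2
test  = [(load[t - LAG], load[t]) for t in range(48, 72)]   # forecast day 3

# Closed-form least-squares estimate of the single AR coefficient
a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

forecasts = [a * x for x, _ in test]
actuals = [y for _, y in test]
mape = 100 * sum(abs(y - f) / y for y, f in zip(actuals, forecasts)) / len(actuals)
print(f"AR coefficient a = {a:.3f}, test MAPE = {mape:.2f}%")
```

Even this one-parameter persistence-style model tracks a strongly daily-periodic series to within a few percent, which helps explain why richer autoregressive models can reach the sub-1% hourly errors the review reports.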
Risk terrain modeling : crime prediction and risk reduction
"Risk terrain modeling (RTM) diagnoses the spatial attractors of criminal behavior and makes accurate predictions of where crime will occur at the micro-level. This book presents RTM as part of a larger risk management agenda that defines and measures crime problems; suggests ways in which they can be addressed through interventions; proposes measures for assessing effectiveness of treatment and sustainability of efforts; and offers suggestions for how police organizations can address vulnerabilities and exposures in the communities that they serve through strategies that go beyond specific deterrence of offenders. Technical and conceptual aspects of RTM are considered in the context of past criminological research, leading to a discussion of crime vulnerabilities and exposures, and the Theory of Risky Places. Then best practices for RTM, crime prediction, and risk reduction are set to ACTION. Case studies empirically demonstrate how RTM can be used to analyze the spatial dynamics of crime, allocate resources, and implement customized crime and risk reduction strategies that are transparent, measurable, and effective. Researchers and practitioners will learn how the combined factors that contribute to criminal behavior can be targeted, connections to crime can be monitored, spatial vulnerabilities can be assessed, and actions can be taken to reduce the worst effects." -- Provided by publisher.
Statistical learning for big dependent data
Master advanced topics in the analysis of large, dynamically dependent datasets with this insightful resource. Statistical Learning for Big Dependent Data delivers a comprehensive presentation of the statistical and machine learning methods useful for analyzing and forecasting large and dynamically dependent data sets.
Development of a Deep Learning Model for Dynamic Forecasting of Blood Glucose Level for Type 2 Diabetes Mellitus: Secondary Analysis of a Randomized Controlled Trial
Type 2 diabetes mellitus (T2DM) is a major public health burden. Self-management of diabetes, including maintaining a healthy lifestyle, is essential for glycemic control and to prevent diabetes complications. Mobile-based health data can play an important role in the forecasting of blood glucose levels for lifestyle management and control of T2DM. The objective of this work was to dynamically forecast daily glucose levels in patients with T2DM based on their daily mobile health lifestyle data, including diet, physical activity, weight, and glucose level from the day before. We used data from 10 T2DM patients who were overweight or obese in a behavioral lifestyle intervention using mobile tools for daily monitoring of diet, physical activity, weight, and blood glucose over 6 months. We developed a deep learning model based on long short-term memory-based recurrent neural networks to forecast next-day glucose levels in individual patients. The neural network used several layers of computational nodes to model how mobile health data (food intake including consumed calories, fat, and carbohydrates; exercise; and weight) progressed from one day to another, from noisy data. The model was validated on a data set of 10 patients who had been monitored daily for over 6 months. The proposed deep learning model demonstrated considerable accuracy in predicting next-day glucose levels based on the Clarke Error Grid and a ±10% range of the actual values. Machine learning methodologies may leverage mobile health lifestyle data to develop effective individualized prediction plans for T2DM management. However, predicting future glucose levels is challenging, as glucose level is determined by multiple factors. Future studies with more rigorous study designs are warranted to better predict future glucose levels for T2DM management.
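A minimal sketch of the supervised-windowing step such a forecaster implies: each day's logged lifestyle features become the input and the following day's glucose becomes the target. The field names and values here are hypothetical, not the study's actual schema.

```python
# Turning daily mobile-health logs into (features_today, glucose_tomorrow)
# training pairs, as an LSTM (or any regressor) would consume them.
# All field names and values are hypothetical.
daily_logs = [
    {"calories": 2100, "carbs_g": 250, "exercise_min": 20, "weight_kg": 92.0, "glucose": 148},
    {"calories": 1900, "carbs_g": 210, "exercise_min": 35, "weight_kg": 91.8, "glucose": 141},
    {"calories": 2300, "carbs_g": 280, "exercise_min": 10, "weight_kg": 91.9, "glucose": 155},
    {"calories": 2000, "carbs_g": 230, "exercise_min": 30, "weight_kg": 91.7, "glucose": 144},
]

FEATURES = ["calories", "carbs_g", "exercise_min", "weight_kg", "glucose"]

def make_pairs(logs):
    """Pair each day's feature vector with the next day's glucose level."""
    pairs = []
    for today, tomorrow in zip(logs, logs[1:]):
        x = [today[f] for f in FEATURES]
        y = tomorrow["glucose"]
        pairs.append((x, y))
    return pairs

pairs = make_pairs(daily_logs)
print(len(pairs), "training pairs; first:", pairs[0])
```

Including the previous day's glucose among the inputs is what makes the forecast "dynamic": the model conditions on the most recent observation rather than on lifestyle features alone.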
Hidden dynamics of soccer leagues: The predictive ‘power’ of partial standings
Soccer leagues reflect the partial standings of the teams involved after each round of competition. However, the ability of partial league standings to predict end-of-season position has largely been ignored. Here we analyze historical partial standings from English soccer to understand the mathematics underpinning league performance and evaluate the predictive 'power' of partial standings. Match data (1995-2017) from the four senior English leagues were analyzed, together with random match scores generated for hypothetical leagues of equivalent size. For each season the partial standings were computed and Kendall's normalized tau-distance and Spearman r-values determined. Best-fit power-law and logarithmic functions were applied to the respective tau-distance and Spearman curves, with goodness-of-fit assessed using the R2 value. The predictive ability of the partial standings was evaluated by computing the transition probabilities between the standings at rounds 10, 20, and 30 and the final end-of-season standings for the 22 seasons. The impact of reordering match fixtures was also evaluated. All four English leagues behaved similarly, irrespective of the teams involved, with the tau-distance conforming closely to a power law (R2 > 0.80) and the Spearman r-value obeying a logarithmic function (R2 > 0.87). The randomized leagues also conformed to a power law, but with a different shape. In the English leagues, team position relative to end-of-season standing became 'fixed' much earlier in the season than was the case with the randomized leagues. In the Premier League, 76.9% of the variance in the final standings was explained by round 10, 87.0% by round 20, and 93.9% by round 30. Reordering match fixtures appeared to alter the shape of the tau-distance curves. All soccer leagues appear to conform to mathematical laws that constrain the league standings as the season progresses. This means that partial standings can be used to predict end-of-season league position with reasonable accuracy.
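Kendall's normalized tau-distance used in the analysis can be computed directly: it is the fraction of team pairs that two standings order differently. A small self-contained sketch with illustrative five-team standings:

```python
# Kendall's normalized tau-distance between two standings:
# the fraction of team pairs ranked in opposite orders.
# 0.0 means identical orderings; 1.0 means fully reversed.
def kendall_tau_distance(rank_a, rank_b):
    pos_b = {team: i for i, team in enumerate(rank_b)}
    n = len(rank_a)
    discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # The pair is discordant if rank_b orders it the other way round
            if pos_b[rank_a[i]] > pos_b[rank_a[j]]:
                discordant += 1
    return discordant / (n * (n - 1) / 2)

# Illustrative standings, best team first (labels are hypothetical)
round10 = ["A", "B", "C", "D", "E"]
final   = ["A", "C", "B", "D", "E"]
print(kendall_tau_distance(round10, final))  # 1 swapped pair out of 10 -> 0.1
```

Computing this distance at each round and fitting the resulting curve is, in outline, how the study quantifies how quickly league positions become 'fixed' over a season.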