Catalogue Search | MBRL
378 result(s) for "Business forecasting Data processing."
Fuzzy logic for business, finance, and management
by Bojadziev, George; Bojadziev, Maria
in Artificial Intelligence (Machine Learning, Neural Networks, Fuzzy Logic); Business forecasting; Computational Economics
2007
This is truly an interdisciplinary book for knowledge workers in business, finance, management and socio-economic sciences based on fuzzy logic. It serves as a guide to techniques for forecasting, decision making and evaluation in an environment involving uncertainty, vagueness, imprecision and subjectivity. Traditional modeling techniques, unlike fuzzy logic, do not capture the nature of complex systems, especially when humans are involved. Fuzzy logic uses human experience and judgement to facilitate plausible reasoning in order to reach a conclusion. Emphasis is on applications presented in the 27 case studies, including Time Forecasting for Project Management, New Product Pricing, and Control of a Parasite-Pest System.
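As an illustration of the kind of fuzzy-logic reasoning the book describes, the snippet below is a minimal, generic sketch (not an example taken from the book): a crisp sales-growth figure is fuzzified with triangular membership functions, two hand-written rules are evaluated, and a membership-weighted average produces a crisp forecast adjustment. All rule values and breakpoints are made up for illustration.

```python
# Minimal fuzzy-inference sketch (illustrative only, not from the book).
# A crisp input (sales growth, %) is fuzzified, two rules are evaluated,
# and a crisp adjustment is produced by a weighted average.

def triangular(x, a, b, c):
    """Degree of membership of x in a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def forecast_adjustment(growth_pct):
    # Fuzzify: how "low" and how "high" is the observed growth?
    low = triangular(growth_pct, -10.0, 0.0, 10.0)
    high = triangular(growth_pct, 0.0, 10.0, 20.0)

    # Hypothetical rules: IF growth is low  THEN adjust forecast by -5%
    #                     IF growth is high THEN adjust forecast by +8%
    rule_outputs = [(-5.0, low), (8.0, high)]

    # Defuzzify with a membership-weighted average.
    total_weight = sum(w for _, w in rule_outputs)
    if total_weight == 0:
        return 0.0
    return sum(v * w for v, w in rule_outputs) / total_weight

if __name__ == "__main__":
    for g in (2.0, 7.5, 15.0):
        print(f"growth {g:5.1f}% -> adjustment {forecast_adjustment(g):+.2f}%")
```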
Fuzzy Logic for Business, Finance, and Management
by Bojadziev, George
in Business forecasting -- Data processing; Decision making -- Data processing; Entscheidungstheorie (decision theory)
2007
Key features: no prior knowledge of fuzzy logic is needed; notes offered after each chapter include historical and philosophical remarks which put the topics in a wider context; a multi-purpose book that is accessible to students and of interest to experts; some original results are presented.
Publication
Forecasting and operational research: a review
by Crone, S F; Nikolopoulos, K; Syntetos, A A
in Analytical forecasting; Applied sciences; Business and Management
2008
From its foundation, operational research (OR) has made many substantial contributions to practical forecasting in organizations. Equally, researchers in other disciplines have influenced forecasting practice. Since the last survey articles in JORS, forecasting has developed as a discipline with its own journals. While the effect of this increased specialization has been a narrowing of the scope of OR's interest in forecasting, research from an OR perspective remains vigorous. OR has been more receptive than other disciplines to the specialist research published in the forecasting journals, capitalizing on some of their key findings. In this paper, we identify the particular topics of OR interest over the past 25 years. After a brief summary of the current research in forecasting methods, we examine those topic areas that have grabbed the attention of OR researchers: computationally intensive methods and applications in operations and marketing. Applications in operations have proved particularly important, including the management of inventories and the effects of sharing forecast information across the supply chain. The second area of application is marketing, including customer relationship management using data mining and computer-intensive methods. The paper concludes by arguing that the unique contribution that OR can continue to make to forecasting is through developing models that link the effectiveness of new forecasting methods to the organizational context in which the models will be applied. The benefits of examining the system rather than its separate components are likely to be substantial.
Journal Article
Big Data: New Tricks for Econometrics
2014
Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
Journal Article
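The variable-selection problem this essay raises can be made concrete with a small, self-contained sketch (synthetic data and scikit-learn's LassoCV; not code from the paper): when there are far more candidate predictors than truly informative ones, the L1 penalty drives most coefficients to zero and leaves a short list of selected variables.

```python
# Illustrative variable selection with the lasso on synthetic "wide" data
# (many candidate predictors, few informative ones). Not from the paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# 500 observations, 200 candidate predictors, only 10 of which matter.
X, y = make_regression(n_samples=500, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)

# Cross-validated lasso picks the penalty strength automatically.
model = LassoCV(cv=5, random_state=0).fit(X, y)

selected = np.flatnonzero(model.coef_)
print(f"chosen alpha: {model.alpha_:.3f}")
print(f"non-zero coefficients: {selected.size} of {X.shape[1]}")
print("selected predictor indices:", selected[:20])
```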
EGCN: Entropy-based graph convolutional network for anomalous pattern detection and forecasting in real estate markets
by Nguyen, Quang; Le, Dat; Rajasegarar, Sutharshan
in Accuracy; Anomalies; Artificial neural networks
2025
Real estate markets are inherently dynamic, influenced by economic fluctuations, policy changes and socio-demographic shifts, often leading to the emergence of anomalous regions, where market behavior significantly deviates from expected trends. Traditional forecasting models struggle to handle such anomalies, resulting in higher errors and reduced prediction stability. To address this challenge, we propose EGCN, a novel cluster-specific forecasting framework that first detects and clusters anomalous regions separately from normal regions, and then applies forecasting models. This structured approach enables predictive models to treat normal and anomalous regions independently, leading to enhanced market insights and improved forecasting accuracy. Our evaluations on UK, US, and Australian real estate market datasets demonstrate that EGCN achieves the lowest error compared with both anomaly-free (baseline) methods and alternative anomaly detection methods, across all forecasting horizons (12, 24, and 48 months). In terms of anomalous region detection, EGCN identifies 182 anomalous regions in Australia, 117 in the UK and 34 in the US, significantly more than the other competing methods, indicating superior sensitivity to market deviations. By clustering anomalies separately, forecasting errors are reduced across all tested forecasting models. For instance, when applying Neural Hierarchical Interpolation for Time Series Forecasting, EGCN improves accuracy across forecasting horizons. In short-term forecasts (12 months), it reduces MSE from 1.3 to 1.0 in the US, 9.7 to 6.4 in the UK and 2.0 to 1.7 in Australia. For mid-term forecasts (24 months), EGCN achieves the lowest errors, lowering MSE from 3.1 to 2.3 (US), 14.2 to 9.0 (UK), and 4.5 to 4.0 (Australia). Even in long-term forecasts (48 months), where error accumulation is common, EGCN remains stable, decreasing MASE from 6.9 to 5.3 (US), 12.2 to 8.5 (UK), and 16.0 to 15.2 (Australia), highlighting its robustness over extended periods. These results highlight how separately clustering anomalies allows forecasting models to better capture distinct market behaviors, ensuring more precise and risk-adjusted predictions.
Journal Article
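The core idea of the paper above, forecasting anomalous and normal regions with separate models rather than one pooled model, can be sketched with toy data. This is not the EGCN architecture: the anomaly flag and the forecasters below are deliberately simple placeholders, and the synthetic numbers have no relation to the paper's results.

```python
# Toy illustration of cluster-specific forecasting: regions flagged as
# anomalous are modelled separately from normal regions, and pooled vs.
# split forecasts are compared. Placeholder models, synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_months = 30, 60
t = np.arange(n_months)

# Most regions follow a gentle upward trend; a few "anomalous" regions
# follow a much steeper one.
slopes = np.where(np.arange(n_regions) < 25, 0.5, 3.0)
prices = 100 + slopes[:, None] * t + rng.normal(0, 2, (n_regions, n_months))

train, test = prices[:, :-12], prices[:, -12:]          # hold out 12 months
t_train, t_test = t[:-12], t[-12:]

# Very simple anomaly flag: a region is anomalous if its fitted trend
# deviates strongly from the cross-regional median trend.
fitted_slopes = np.array([np.polyfit(t_train, r, 1)[0] for r in train])
anomalous = np.abs(fitted_slopes - np.median(fitted_slopes)) > 1.0

def trend_forecast(rows):
    """Fit one shared linear trend to a group of regions and extrapolate it."""
    coef = np.polyfit(np.tile(t_train, rows.shape[0]), rows.ravel(), 1)
    return np.polyval(coef, t_test)

# Pooled model: one trend for everybody.
mse_pooled = np.mean((test - trend_forecast(train)) ** 2)

# Cluster-specific: separate trends for normal and anomalous regions.
preds = np.empty_like(test)
for mask in (anomalous, ~anomalous):
    preds[mask] = trend_forecast(train[mask])
mse_split = np.mean((test - preds) ** 2)

print(f"pooled MSE: {mse_pooled:.1f}   split-by-anomaly MSE: {mse_split:.1f}")
```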
A data-driven forecasting approach for newly launched seasonal products by leveraging machine-learning approaches
by Firdolas, Efendigil Tugba; Kharfan, Majd; Chan, Vicky Wing Kei
in Customers; Economic forecasting; Fashion
2021
Companies in the fashion industry struggle with forecasting demand due to the short selling season, long lead times between operations, huge product variety and ambiguity of demand information. The forecasting process is made more complicated by evolving retail technology trends. Demand volatility and speed are strongly affected by e-commerce strategies and social media usage, alongside varying customer preferences, short product lifecycles, the obsolescence of the retail calendar, and the lack of information for newly launched seasonal items. Consumers have become more demanding and less predictable in their purchasing behavior, expecting high quality, guaranteed availability and fast delivery. Meeting customers' high expectations starts with proper demand management. This study focuses on demand prediction from a data-driven perspective, both leveraging machine learning techniques and identifying significant predictor variables, to help fashion retailers achieve better forecast accuracy. The prediction results obtained were compared to show the benefits of machine learning approaches. The proposed approach was applied by a leading fashion retail company to forecast the demand of newly launched seasonal products without historical data.
Journal Article
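One generic way to make the "no sales history for the new item" problem concrete (a sketch only, not the retailer's actual pipeline; the features, numbers and model choice below are invented): train a regressor on attributes of previously launched products and their realised season demand, then score the new product's attributes.

```python
# Generic sketch of attribute-based demand forecasting for a product with
# no sales history of its own: learn from past launches, score the new one.
# Features, numbers and model choice are illustrative, not from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n_past = 400

# Hypothetical attributes of past launches: price, planned markdown depth,
# number of colourways, and a 0/1 flag for a social-media campaign.
X_past = np.column_stack([
    rng.uniform(10, 80, n_past),
    rng.uniform(0.0, 0.4, n_past),
    rng.integers(1, 8, n_past),
    rng.integers(0, 2, n_past),
])
# Synthetic "realised season demand" with some structure plus noise.
y_past = (2000 - 15 * X_past[:, 0] + 1500 * X_past[:, 1]
          + 120 * X_past[:, 2] + 400 * X_past[:, 3]
          + rng.normal(0, 150, n_past))

model = GradientBoostingRegressor(random_state=0).fit(X_past, y_past)

# New product: priced at 45, 20% markdown planned, 5 colourways, campaign on.
new_item = np.array([[45.0, 0.20, 5, 1]])
print(f"predicted season demand: {model.predict(new_item)[0]:.0f} units")
```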
The Consequences of Information Technology Control Weaknesses on Management Information Systems: The Case of Sarbanes-Oxley Internal Control Reports
by Richardson, Vernon J.; Li, Chan; Watson, Marcia Weidenmier
in Accuracy; Analytical forecasting; Data processing
2012
In this article, the association between the strength of information technology controls over management information systems and the subsequent forecasting ability of the information produced by those systems is investigated. The Sarbanes-Oxley Act of 2002 highlights the importance of information system controls by requiring management and auditors to report on the effectiveness of internal controls over the financial reporting component of the firm's management information systems. We hypothesize and find evidence that management forecasts are less accurate for firms with information technology material weaknesses in their financial reporting system than the forecasts for firms that do not have information technology material weaknesses. In addition, we examine three dimensions of information technology material weaknesses: data processing integrity, system access and security, and system structure and usage. We find that the association with forecast accuracy appears to be strongest for IT control weaknesses most directly related to data processing integrity. Our results support the contention that information technology controls, as a part of the management information system, affect the quality of the information produced by the system. We discuss the complementary nature of our findings to the information and systems quality literature.
Journal Article
Intelligent Productivity Transformation: Corporate Market Demand Forecasting With the Aid of an AI Virtual Assistant
2024
With the penetration of deep learning technology into forecasting and decision support systems, enterprises have an increasingly urgent need for accurate forecasting of time series data. Especially in fields such as finance, retail, and production, immediate and accurate predictions of market trends are the key to maintaining a competitive advantage. This study aims to address the limitations of traditional time series forecasting methods, such as the difficulty of adapting to the nonlinearity and non-stationarity of the data, through an innovative deep learning framework. The authors propose a hybrid model that combines the deep-learning LSTNet with the statistical Prophet model. In this way, they combine the ability of LSTNet to handle complex time dependencies with the flexibility of the Prophet model to handle trends and periodicity. The particle swarm optimization algorithm (PSO) is responsible for tuning this hybrid model, aiming to improve the accuracy of predictions. Such a strategy not only helps capture long-term dependencies in time series, but also models seasonality and holiday effects well.
Journal Article
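To make the "hybrid forecast plus swarm-based tuning" idea above concrete, here is a deliberately small sketch: two placeholder component forecasts are blended with a single weight, and a minimal particle swarm searches for the weight that minimises validation error. The paper tunes an LSTNet/Prophet hybrid with PSO; none of the code below comes from it, and the components here are simple stand-ins.

```python
# Minimal sketch: blend two component forecasts with one weight and tune
# the weight with a tiny particle swarm (PSO). The components are stand-ins,
# not LSTNet or Prophet, and nothing below is from the paper.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(120)
actual = 10 + 0.1 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

# Two imperfect "component" forecasts: one captures the trend, one the seasonality.
trend_fc = 10 + 0.1 * t
season_fc = actual.mean() + 3 * np.sin(2 * np.pi * t / 12)

def val_mse(w):
    """Validation MSE of the blended forecast on the last 24 points."""
    blend = w * trend_fc + (1 - w) * season_fc
    return np.mean((actual[-24:] - blend[-24:]) ** 2)

# Tiny PSO over the single blending weight in [0, 1].
n_particles, n_iter = 10, 30
pos = rng.uniform(0, 1, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), np.array([val_mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=n_particles), rng.uniform(size=n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([val_mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print(f"best blending weight: {gbest:.2f}, validation MSE: {val_mse(gbest):.3f}")
```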
Ensembles of Overfit and Overconfident Forecasts
by Lichtendahl, Kenneth C.; Jose, Victor Richmond R.; Grushka-Cockayne, Yael
in Algorithms; Averages; base-rate neglect
2017
Firms today average forecasts collected from multiple experts and models. Because of cognitive biases, strategic incentives, or the structure of machine-learning algorithms, these forecasts are often overfit to sample data and are overconfident. Little is known about the challenges associated with aggregating such forecasts. We introduce a theoretical model to examine the combined effect of overfitting and overconfidence on the average forecast. Their combined effect is that the mean and median probability forecasts are poorly calibrated with hit rates of their prediction intervals too high and too low, respectively. Consequently, we prescribe the use of a trimmed average, or trimmed opinion pool, to achieve better calibration. We identify the random forest, a leading machine-learning algorithm that pools hundreds of overfit and overconfident regression trees, as an ideal environment for trimming probabilities. Using several known data sets, we demonstrate that trimmed ensembles can significantly improve the random forest’s predictive accuracy.
This paper was accepted by James Smith, decision analysis.
Journal Article
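The trimming idea above is easy to try on a random forest: instead of the forest's plain mean over its trees, average the per-tree predictions after discarding the most extreme ones. The sketch below uses scikit-learn and a synthetic dataset; the 10% trimming fraction is arbitrary and this is not the authors' code.

```python
# Compare the random forest's usual mean-of-trees prediction with a trimmed
# mean over the individual trees' predictions (illustrative sketch only).
import numpy as np
from scipy.stats import trim_mean
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Per-tree predictions: shape (n_trees, n_test_points).
tree_preds = np.stack([tree.predict(X_te) for tree in rf.estimators_])

mean_pred = tree_preds.mean(axis=0)                # the forest's usual average
trimmed_pred = trim_mean(tree_preds, 0.1, axis=0)  # drop the 10% most extreme trees

mse = lambda p: np.mean((y_te - p) ** 2)
print(f"mean ensemble MSE:    {mse(mean_pred):.1f}")
print(f"trimmed ensemble MSE: {mse(trimmed_pred):.1f}")
```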