Search Results

Filters:
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters: Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location

208 results for "Research libraries Forecasting."
The case for books : past, present, and future
"The era of the printed book is at a crossroad. E-readers are flooding the market, books are available to read on cell phones, and companies such as Google, Amazon, and Apple are competing to command near monopolistic positions as sellers and dispensers of digital information. Is the printed book resilient enough to survive the digital revolution, or will it become obsolete? In this lasting collection of essays, Robert Darnton, an intellectual pioneer in the field of the history of the book, lends unique authority to the life, role, and legacy of the book in society." --P. 4 of cover.
The Case for Books
The era of the printed book is at a crossroad. E-readers are flooding the market, books are available to read on cell phones, and companies such as Google, Amazon, and Apple are competing to command near monopolistic positions as sellers and dispensers of digital information. Already, more books have been scanned and digitized than were housed in the great library in Alexandria. Is the printed book resilient enough to survive the digital revolution, or will it become obsolete? In this lasting collection of essays, Robert Darnton, an intellectual pioneer in the field of the history of the book, lends unique authority to the life, role, and legacy of the book in society.
Harmonized Emissions Component (HEMCO) 3.0 as a versatile emissions component for atmospheric models: application in the GEOS-Chem, NASA GEOS, WRF-GC, CESM2, NOAA GEFS-Aerosol, and NOAA UFS models
Emissions are a central component of atmospheric chemistry models. The Harmonized Emissions Component (HEMCO) is a software component for computing emissions from a user-selected ensemble of emission inventories and algorithms. It allows users to re-grid, combine, overwrite, subset, and scale emissions from different inventories through a configuration file and with no change to the model source code. The configuration file also maps emissions to model species with appropriate units. HEMCO can operate in offline stand-alone mode, but more importantly it provides an online facility for models to compute emissions at runtime. HEMCO complies with the Earth System Modeling Framework (ESMF) for portability across models. We present a new version here, HEMCO 3.0, that features an improved three-layer architecture to facilitate implementation into any atmospheric model and improved capability for calculating emissions at any model resolution including multiscale and unstructured grids. The three-layer architecture of HEMCO 3.0 includes (1) the Data Input Layer that reads the configuration file and accesses the HEMCO library of emission inventories and other environmental data, (2) the HEMCO Core that computes emissions on the user-selected HEMCO grid, and (3) the Model Interface Layer that re-grids (if needed) and serves the data to the atmospheric model and also serves model data to the HEMCO Core for computing emissions dependent on model state (such as from dust or vegetation). The HEMCO Core is common to the implementation in all models, while the Data Input Layer and the Model Interface Layer are adaptable to the model environment. Default versions of the Data Input Layer and Model Interface Layer enable straightforward implementation of HEMCO in any simple model architecture, and options are available to disable features such as re-gridding that may be done by independent couplers in more complex architectures. The HEMCO library of emission inventories and algorithms is continuously enriched through user contributions so that new inventories can be immediately shared across models. HEMCO can also serve as a general data broker for models to process input data not only for emissions but for any gridded environmental datasets. We describe existing implementations of HEMCO 3.0 in (1) the GEOS-Chem “Classic” chemical transport model with shared-memory infrastructure, (2) the high-performance GEOS-Chem (GCHP) model with distributed-memory architecture, (3) the NASA GEOS Earth System Model (GEOS ESM), (4) the Weather Research and Forecasting model with GEOS-Chem (WRF-GC), (5) the Community Earth System Model Version 2 (CESM2), and (6) the NOAA Global Ensemble Forecast System – Aerosols (GEFS-Aerosols), as well as the planned implementation in the NOAA Unified Forecast System (UFS). Implementation of HEMCO in CESM2 contributes to the Multi-Scale Infrastructure for Chemistry and Aerosols (MUSICA) by providing a common emissions infrastructure to support different simulations of atmospheric chemistry across scales.
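The abstract's three-layer design is easy to picture in code. Below is a minimal conceptual sketch of that separation, assuming toy grids and invented class names; it is not HEMCO's actual (Fortran/ESMF) interface.

```python
# Conceptual sketch of the three-layer architecture described in the abstract.
# Class and method names are illustrative inventions, not HEMCO's real API.
import numpy as np
from scipy.ndimage import zoom

class DataInputLayer:
    """Reads the configuration file and loads inventory fields (stand-in)."""
    def __init__(self, config):
        self.config = config

    def load(self, field_name):
        # Placeholder for reading a gridded inventory from the data library.
        return np.ones((46, 72))  # toy coarse-resolution grid

class HemcoCore:
    """Computes emissions on the user-selected HEMCO grid; model-independent."""
    def compute_emissions(self, base_field, scale_factor):
        return base_field * scale_factor

class ModelInterfaceLayer:
    """Re-grids (if needed) and serves the emissions to the host model."""
    def regrid(self, field, target_shape):
        factors = [t / s for t, s in zip(target_shape, field.shape)]
        return zoom(field, factors, order=1)  # real couplers re-grid conservatively

# Wiring the layers together in the order the architecture section lists them:
inputs = DataInputLayer({"inventory": "toy_NO_anthro", "scale": 1.2})
core = HemcoCore()
iface = ModelInterfaceLayer()

emis = core.compute_emissions(inputs.load("NO_anthro"), scale_factor=1.2)
print(iface.regrid(emis, target_shape=(91, 144)).shape)  # (91, 144)
```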
The Consequences of Information Technology Control Weaknesses on Management Information Systems: The Case of Sarbanes-Oxley Internal Control Reports
In this article, we investigate the association between the strength of information technology controls over management information systems and the subsequent forecasting ability of the information produced by those systems. The Sarbanes-Oxley Act of 2002 highlights the importance of information system controls by requiring management and auditors to report on the effectiveness of internal controls over the financial reporting component of the firm's management information systems. We hypothesize and find evidence that management forecasts are less accurate for firms with information technology material weaknesses in their financial reporting systems than for firms without such weaknesses. In addition, we examine three dimensions of information technology material weaknesses: data processing integrity, system access and security, and system structure and usage. We find that the association with forecast accuracy appears to be strongest for IT control weaknesses most directly related to data processing integrity. Our results support the contention that information technology controls, as a part of the management information system, affect the quality of the information the system produces. We discuss the complementary nature of our findings to the information and systems quality literature.
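To make the paper's central comparison concrete, here is a minimal sketch, assuming a toy dataset and hypothetical column names, of how one might measure management forecast accuracy and compare firms with and without IT material weaknesses; it is not the authors' actual specification.

```python
# Hypothetical illustration: absolute management forecast error, grouped by
# an IT material-weakness flag. Column names are assumptions, not the paper's.
import pandas as pd

df = pd.DataFrame({
    "forecast_eps": [1.10, 0.95, 2.00, 0.40, 1.50, 0.80],
    "actual_eps":   [1.00, 1.00, 1.90, 0.70, 1.45, 0.95],
    "it_material_weakness": [0, 0, 0, 1, 1, 1],
})

# Studies typically scale error by price or assets; a raw absolute error
# keeps the toy example simple.
df["abs_forecast_error"] = (df["forecast_eps"] - df["actual_eps"]).abs()

# The hypothesis predicts a higher mean error for the weakness group (flag=1).
print(df.groupby("it_material_weakness")["abs_forecast_error"].mean())
```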
Synthetic method of analogues for emerging infectious disease forecasting
The Method of Analogues (MOA) has gained popularity in the past decade for infectious disease forecasting due to its non-parametric nature. In MOA, the local behavior observed in a time series is matched to the local behaviors of several historical time series. The known values that directly follow the historical time series that best match the observed time series are used to calculate a forecast. This non-parametric approach leverages historical trends to produce forecasts without extensive parameterization, making it highly adaptable. However, MOA is limited in scenarios where historical data is sparse. This limitation was particularly evident during the early stages of the COVID-19 pandemic, when the emerging global epidemic had little-to-no historical data. In this work, we propose a new method inspired by MOA, called the Synthetic Method of Analogues (sMOA). sMOA replaces historical disease data with a library of synthetic data that describes a broad range of possible disease trends. This model circumvents the need to estimate explicit parameter values by instead matching segments of ongoing time series data to a comprehensive library of synthetically generated segments of time series data. We demonstrate that sMOA has competitive performance with state-of-the-art infectious disease forecasting models, outperforming 78% of models from the COVID-19 Forecasting Hub in terms of averaged Mean Absolute Error and 76% of models in terms of averaged Weighted Interval Score. Additionally, we introduce a novel uncertainty quantification methodology designed for the onset of emerging epidemics. Developing versatile approaches that do not rely on historical data and can maintain high accuracy in the face of novel pandemics is critical for enhancing public health decision-making and strengthening preparedness for future outbreaks.
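The matching step behind MOA/sMOA can be sketched in a few lines. The following is a rough reconstruction from the abstract, assuming a toy library of logistic-growth curves as the synthetic segments; it is not the authors' implementation.

```python
# Sketch of the MOA/sMOA idea: match the recent segment of an observed series
# to a library of (here, synthetic) segments and forecast with the values
# that follow the best-matching segment.
import numpy as np

rng = np.random.default_rng(0)
seg_len, horizon = 14, 7

# sMOA replaces historical epidemics with synthetic trends; this toy library
# holds logistic-growth curves with varied scales, rates, and midpoints.
t = np.arange(seg_len + horizon)
library = [a / (1 + np.exp(-r * (t - t0)))
           for a in (50, 100, 200) for r in (0.2, 0.5, 1.0) for t0 in (5, 10, 15)]

# Noisy "observed" incidence for the last 14 days.
observed = 100 / (1 + np.exp(-0.4 * (np.arange(seg_len) - 8)))
observed += rng.normal(0, 1, seg_len)

def forecast(observed, library, horizon):
    # Score each candidate by Euclidean distance over the overlapping window.
    dists = [np.linalg.norm(curve[:len(observed)] - observed) for curve in library]
    best = library[int(np.argmin(dists))]
    # The continuation of the best match becomes the forecast.
    return best[len(observed):len(observed) + horizon]

print(forecast(observed, library, horizon))
```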
An ARIMA-based study of bibliometric index prediction
Purpose: The purpose of this paper is to predict bibliometric indicators based on ARIMA models and to study the short-term trends of bibliometric indicators.
Design/methodology/approach: This paper establishes a non-stationary time series ARIMA (p, d, q) model for forecasting based on the bibliometric index data of 13 library and information science journals selected from the Chinese Social Sciences Citation Index (CSSCI) for the period 1998–2018, and uses ACF and PACF methods for parameter estimation to predict the development trend of the bibliometric indicators over the next 5 years. The fitted model was also subjected to error analysis.
Findings: ARIMA models are feasible for predicting bibliometric indicators. The model predicted the trend of the four bibliometric indicators over the next 5 years: the number of publications showed a decreasing trend, while the H-value, average citations, and citations showed an increasing trend. Error analysis showed that the average absolute percentage error of the four bibliometric indicators was within 5%, indicating that the model predicts well.
Research limitations/implications: This study has some limitations. Thirteen Chinese journals in the field of Library and Information Science were selected as the research objects. However, the scope of research based on bibliometric indicators of Chinese journals is relatively small and cannot represent the evolution trend of the entire discipline. Therefore, in the future, the authors will select different fields and different sources for further research.
Originality/value: This study predicts the trend changes of bibliometric indicators over the next 5 years to understand the trend of bibliometric indicators, which is beneficial for further in-depth research. At the same time, it provides a new and effective method for predicting bibliometric indicators.
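For readers unfamiliar with the workflow, a minimal ARIMA forecast of the kind described can be run with statsmodels. The series and the (1, 1, 1) order below are illustrative assumptions, not the paper's fitted values (the authors selected orders via ACF/PACF).

```python
# Sketch of an ARIMA(p, d, q) forecast with a 5-year horizon and a MAPE-style
# error check, mirroring the abstract's workflow on a toy series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy yearly "number of publications" series, 1998-2018 (21 points).
pubs = np.array([120, 125, 131, 140, 150, 155, 162, 171, 180, 178,
                 185, 190, 188, 192, 195, 193, 190, 188, 185, 182, 180])

model = ARIMA(pubs, order=(1, 1, 1)).fit()
print(model.forecast(steps=5))  # the next five years, as in the paper

# Mean absolute percentage error on in-sample predictions, mirroring the
# paper's error analysis (they report MAPE within 5% for all indicators).
fitted = model.predict(start=1, end=len(pubs) - 1)
mape = np.mean(np.abs((pubs[1:] - fitted) / pubs[1:])) * 100
print(f"MAPE: {mape:.1f}%")
```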
On Self-Selection Biases in Online Product Reviews
Online product reviews help consumers infer product quality, and the mean (average) rating is often used as a proxy for product quality. However, two self-selection biases, acquisition bias (mostly consumers with a favorable predisposition acquire a product and hence write a product review) and underreporting bias (consumers with extreme, either positive or negative, ratings are more likely to write reviews than consumers with moderate product ratings), render the mean rating a biased estimator of product quality, and they result in the well-known J-shaped (positively skewed, asymmetric, bimodal) distribution of online product reviews. To better understand the nature and consequences of these two self-selection biases, we analytically model and empirically investigate how these two biases originate from consumers’ purchasing and reviewing decisions, how these decisions shape the distribution of online product reviews over time, and how they affect the firm’s product pricing strategy. Our empirical results reveal that consumers do realize both self-selection biases and attempt to correct for them by using other distributional parameters of online reviews, besides the mean rating. However, consumers cannot fully account for these two self-selection biases because of bounded rationality. We also find that firms can strategically respond to these self-selection biases by adjusting their prices. Still, since consumers cannot fully correct for these two self-selection biases, product demand, the firm’s profit, and consumer surplus may all suffer from the two self-selection biases. This paper has implications for consumers to leverage online product reviews to infer true product quality, for commercial websites to improve the design of their online product review systems, and for product manufacturers to predict the success of their products.
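A toy simulation helps show how the two biases jointly produce the J-shaped distribution the abstract mentions. This is an illustration of the mechanism only, with assumed parameter values, not the paper's analytical model.

```python
# Toy simulation: acquisition bias shifts who buys (and how satisfied they
# are); underreporting bias makes extreme raters more likely to post.
import numpy as np

rng = np.random.default_rng(1)
true_quality = 3.2  # latent product quality on a 1-5 scale (assumed)

# Acquisition bias: buyers are favorably predisposed, so experienced
# satisfaction is shifted up relative to the general population.
satisfaction = np.clip(rng.normal(true_quality + 0.5, 1.2, 100_000), 1, 5)
ratings = np.rint(satisfaction)

# Underreporting bias: review probability grows with distance from the
# neutral midpoint (3), so 1s and 5s are overrepresented among posts.
p_review = 0.1 + 0.4 * np.abs(ratings - 3) / 2
posted = ratings[rng.random(ratings.size) < p_review]

counts = {int(r): int((posted == r).sum()) for r in range(1, 6)}
print(counts)  # bimodal, positively skewed "J" shape
print("mean posted rating:", posted.mean(), "vs true quality:", true_quality)
```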
Improving dust simulations in WRF-Chem v4.1.3 coupled with the GOCART aerosol module
In this paper, we rectify inconsistencies that emerge in the Weather Research and Forecasting model with chemistry (WRF-Chem) v3.2 code when using the Goddard Chemistry Aerosol Radiation and Transport (GOCART) aerosol module. These inconsistencies have been reported, and corrections have been implemented in WRF-Chem v4.1.3. Here, we use a WRF-Chem experimental setup configured over the Middle East (ME) to estimate the effects of these inconsistencies. Firstly, we show that the old version underestimates the PM2.5 diagnostic output by 7 % and overestimates PM10 by 5 % in comparison with the corrected one. Secondly, we demonstrate that the contribution of submicron dust particles was incorrectly accounted for in the calculation of optical properties; therefore, aerosol optical depth (AOD) in the old version was 25 %–30 % less than in the corrected one. Thirdly, we show that the gravitational settling procedure caused dust column loadings to be 4 %–6 % higher, PM10 surface concentrations 2 %–4 % higher, and the mass of gravitationally settled dust 5 %–10 % higher than in the corrected version. The cumulative effect of these inconsistencies led to significantly higher dust content in the atmosphere in comparison with the corrected WRF-Chem version. Our results explain why PM10 concentrations were overestimated in many WRF-Chem simulations. We present the methodology for calculating the diagnostics we used to estimate the impacts of the introduced code modifications. We also share the developed Merra2BC interpolator, which processes Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) output to construct initial and boundary conditions for chemical species and aerosols.
A review of scientific impact prediction: tasks, features and methods
With the rapid evolution of scientific research, a huge volume of papers is published every year and the number of scholars is also growing fast. How to effectively predict scientific impact has become an important research problem that attracts the attention of researchers in various fields and is of great significance for improving research efficiency and assisting decision-making and scientific evaluation. In this paper, we propose a new framework to perform a systematic survey of scientific impact prediction research. Specifically, we take the four common academic entities into account: papers, scholars, venues and institutions. We review in detail all the prediction tasks reported in the literature; the input features are divided into six groups: paper-related, author-related, venue-related, institution-related, network-related and altmetrics-related. Moreover, we classify the forecasting methods into mathematical statistics-based, traditional machine learning-based, deep learning-based and graph-based, and subdivide each category according to its characteristics. Finally, we discuss open issues and existing challenges, and provide potential research directions.