Search Results

4,845 result(s) for "Input data"
Combined Experimental and Field Data Sources in a Prediction Model for Corrosion Rate under Insulation
Corrosion under insulation (CUI) is a growing industrial problem, especially in chemical plants that have been running for an extended time. Prediction modeling, one solution to this issue, has attracted increasing attention and has been considered for several industrial applications. The main objective of this work was to investigate the effect of combined input data in prediction modeling, which could be applied to improve the existing CUI rate prediction model. Experimental data and field historical data were gathered and simulated separately using an artificial neural network. To analyze the effect of the data source on the final CUI rate prediction model, the experimental and field data were then combined and simulated again with an artificial neural network. The results demonstrate the advantages of combining experimental and field input data in the final prediction model. The developed model clearly shows that corrosion occurs in phases: uniform corrosion in the early phases and pitting corrosion in the later phases. The prediction model will enable better mitigation actions for preventing loss of containment due to CUI, which in turn will improve the overall sustainability of the plant.
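As a rough illustration of the combined-input idea in the abstract above, the sketch below pools two invented data sources and fits a single model to the pooled set. A trivial least-squares line stands in for the paper's artificial neural network, and all values (exposure times, corrosion rates) are made up.

```python
# Hypothetical sketch: pooling experimental and field measurements into one
# training set for a corrosion-rate model. The linear fit is a stand-in for
# the paper's neural network; every number here is illustrative.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# (exposure time in years, measured CUI rate in mm/yr) -- invented values
experimental = [(1, 0.10), (2, 0.18), (3, 0.31)]
field        = [(5, 0.52), (8, 0.85), (12, 1.30)]

# The paper's key step: train on both sources at once
combined = experimental + field
xs, ys = zip(*combined)
a, b = fit_linear(list(xs), list(ys))
print(f"rate ~= {a:.3f} * years + {b:.3f}")
```

Training on the pooled set lets the short-horizon experimental points and the long-horizon field points jointly constrain one model, which is the effect the abstract attributes to the combined input data.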
Asset allocation efficiency from dynamic and static strategies in underfunded pension funds
This study conducts a comparative analysis between dynamic and static asset allocation for achieving the long-term target return in asset liability management (ALM). It performs asset allocation using the ex ante expected rate of return derived from the outlook of future economic indicators, because the past economic indicators and realized rates of return used as inputs for expected returns in the "building block" method, the approach most widely adopted by domestic pension funds, do not fully reflect the future economic situation. Vector autoregression, which is widely used for financial time series, is applied to estimate and forecast long-term interest rates, gross domestic product, and the consumer price index. Based on asset allocation simulations, the study derives the following insights. First, economic indicator filtering and upper/lower bound computation are needed to reduce expected-return volatility. Second, to reach the ALM goal, more stocks should be allocated than low-yielding assets. Finally, dynamic asset allocation, which actively mirrors economic changes, achieves a higher annual yield and risk-adjusted return than static asset allocation.
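The vector-autoregression forecasting step described above can be hinted at with a stripped-down scalar AR(1), fitted by least squares and iterated forward. The rate series, lag order, and horizon below are illustrative assumptions, not the study's actual data or specification.

```python
# Scalar AR(1) stand-in for the study's VAR: fit r[t] = c + phi * r[t-1]
# by least squares, then iterate to build a long-horizon forecast.
rates = [3.2, 3.0, 2.9, 2.7, 2.6, 2.5, 2.45, 2.4]  # invented interest rates (%)

x, y = rates[:-1], rates[1:]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
      sum((a - mx) ** 2 for a in x)
c = my - phi * mx

# Iterate the fitted equation forward from the last observation
forecast = [rates[-1]]
for _ in range(5):
    forecast.append(c + phi * forecast[-1])
print([round(v, 3) for v in forecast[1:]])
```

A full VAR does the same thing with a vector of series (rates, GDP, CPI) and a coefficient matrix in place of `phi`, so each variable's forecast feeds the others.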
Channel estimation and symbol detection for OFDM systems using data-nulling superimposed pilots
A novel data-nulling superimposed pilot scheme for orthogonal frequency division multiplexing (OFDM) systems is proposed, in which the input data vector is spread over all subcarriers by a precoding matrix and then nulled at certain subcarriers to allow the insertion of training pilots. This method avoids the data-rate loss of frequency-division multiplexed pilots but distorts the input data. To mitigate the distortion introduced by the nulling operation, a simple iterative reconstruction scheme is used to improve detection performance.
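A toy numerical sketch of the scheme described in this abstract, under assumed sizes (8 subcarriers, pilots at indices 0 and 4): data is spread by a unitary DFT precoder, nulled at the pilot subcarriers, and the receiver inverts the precoder. What remains is exactly the distortion that the iterative reconstruction would then remove.

```python
import cmath

N = 8                       # subcarriers in this toy example (assumption)
pilot_pos = (0, 4)          # pilot subcarrier indices (assumption)
pilot_val = 2.0 + 0j        # training pilot value (assumption)

# Unitary DFT matrix used as the precoder
w = cmath.exp(-2j * cmath.pi / N)
F = [[w ** (i * k) / N ** 0.5 for k in range(N)] for i in range(N)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

data = [1, 1, -1, 1, 1, -1, -1, 1]        # toy BPSK symbols
spread = matvec(F, data)                  # precoding spreads data over all subcarriers
for p in pilot_pos:
    spread[p] = pilot_val                 # null the data there, superimpose the pilot

# Receiver over an ideal channel: discard pilot positions, invert the precoder
rx = [0 if i in pilot_pos else s for i, s in enumerate(spread)]
FH = [[F[k][i].conjugate() for k in range(N)] for i in range(N)]
est = matvec(FH, rx)

# Residual distortion caused by the nulling -- the quantity the paper's
# iterative reconstruction scheme is designed to cancel
err = max(abs(e - d) for e, d in zip(est, data))
print(round(err, 3))
```

Note that no data rate is sacrificed: all 8 symbols are transmitted, at the cost of the bounded distortion measured by `err`.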
A data scientist's guide to acquiring, cleaning and managing data in R
The only how-to guide offering a unified, systematic approach to acquiring, cleaning, and managing data in R. Every experienced practitioner knows that preparing data for modeling is a painstaking, time-consuming process.
Time-dependent MHD modeling of the global solar corona for year 2007: Driven by daily-updated magnetic field synoptic data
In this paper, we develop a time-dependent MHD model driven by daily-updated synoptic magnetograms (MHD-DUSM) to study the dynamic evolution of the global corona, with the help of the 3D Solar-Interplanetary (SIP) adaptive mesh refinement (AMR) space-time conservation element and solution element (CESE) MHD model (SIP-AMR-CESE MHD Model). To accommodate the observations, the tangential component of the electric field at the lower boundary is specified to allow the flux evolution to match the observed changes of the magnetic field. Meanwhile, time-dependent solar surface boundary conditions derived from the method of characteristics and the mass flux limit are incorporated to couple the observations and the 3D MHD model. The simulated evolution of the global coronal structure during 2007 is compared with solar observations and solar wind measurements from both Ulysses and spacecraft near the Earth. The MHD-DUSM model is also validated by comparisons with the standard potential field source surface (PFSS) model, the newly improved Wang-Sheeley-Arge (WSA) empirical formula, and an MHD simulation with a monthly synoptic magnetogram (MHD-MSM). Comparisons show that the MHD-DUSM results have good overall agreement with coronal and interplanetary structures, including the sizes and distributions of coronal holes, the positions and shapes of the streamer belts, and the transitions of the solar wind speeds and magnetic field polarities. The MHD-DUSM results also display many features different from those of the PFSS, WSA, and MHD-MSM models.
Key Points:
  • A time-dependent model is developed for the dynamic evolution of the global corona
  • The model is driven by daily-updated magnetic field synoptic data
  • MHD results have good agreement with coronal and interplanetary observations
Input–Output Uncertainty Comparisons for Discrete Optimization via Simulation
Selecting the optimal policy using simulation is subject to input model risk when input models that mimic real-world randomness in the simulation have estimation error due to finite sample sizes. Instead of trying to find the optimal solution under unknown real-world input distributions by taking a conservative stance or with low statistical guarantee, the input–output uncertainty comparisons (IOU-C) procedure finds a set of solutions that cannot be separated from the best given the resolution decided by the finite sample sizes. The common-input-data (CID) effects measure how differently solutions are affected by the common estimated input models. When the CID effects of two systems are positively correlated, the comparison becomes easier than estimating the performance measures of the two systems precisely under input model risk; the IOU-C procedure takes advantage of the CID effects to develop a sharp comparison and thereby provides a small subset even in the presence of input model risk.

When input distributions to a simulation model are estimated from real-world data, they naturally have estimation error causing input uncertainty in the simulation output. If an optimization via simulation (OvS) method is applied that treats the input distributions as "correct," then there is a risk of making a suboptimal decision for the real world, which we call input model risk. This paper addresses a discrete OvS (DOvS) problem of selecting the real-world optimal from among a finite number of systems when all of them share the same input distributions estimated from common input data. Because input uncertainty cannot be reduced without collecting additional real-world data, which may be expensive or impossible, a DOvS procedure should reflect the limited resolution provided by the simulation model in distinguishing the real-world optimal solution from the others.
In light of this, our input–output uncertainty comparisons (IOU-C) procedure focuses on comparisons rather than selection: it provides simultaneous confidence intervals for the difference between each system's real-world mean and the best mean of the rest with any desired probability, while accounting for both stochastic and input uncertainty. To make the resolution as high as possible (the intervals as short as possible), we exploit the common input data effect to reduce uncertainty in the estimated differences. Under mild conditions we prove that the IOU-C procedure provides the desired statistical guarantee asymptotically as the real-world sample size and simulation effort increase, but it is designed to be effective in finite samples. The electronic companion of this paper is available at https://doi.org/10.1287/opre.2018.1796.
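The common-input-data effect the procedure exploits can be demonstrated in miniature: when two hypothetical systems are simulated on the same input draws, the variance of their estimated difference collapses relative to using independent draws. The systems, input distribution, and sample sizes below are invented for illustration.

```python
import random

# Two toy "systems" driven by the same scalar input; system_b has a
# slightly better mean response (both are invented for this sketch).
def system_a(x): return x * 1.00
def system_b(x): return x * 1.05

rng = random.Random(42)
reps = 2000

# Common input data: both systems see identical draws from the
# (estimated) input model
diffs_common = []
for _ in range(reps):
    x = rng.gauss(10, 2)
    diffs_common.append(system_b(x) - system_a(x))

# Independent inputs: each system gets its own draws
diffs_indep = []
for _ in range(reps):
    xa = rng.gauss(10, 2)
    xb = rng.gauss(10, 2)
    diffs_indep.append(system_b(xb) - system_a(xa))

def var(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

# Sharing inputs shrinks the variance of the estimated difference,
# which is what lets IOU-C make sharper comparisons
print(var(diffs_common) < var(diffs_indep))
```

The same mechanism underlies the paper's shorter confidence intervals: positively correlated CID effects cancel in the difference, even though each system's own mean remains uncertain.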
Short-term traffic flow prediction using seasonal ARIMA model with limited input data
Background: Accurate prediction of traffic flow is an integral component of most Intelligent Transportation Systems (ITS) applications. The data-driven approach using Box-Jenkins Autoregressive Integrated Moving Average (ARIMA) models reported in most studies demands a sound database for model building. Hence, the applicability of these models remains in question in places where data availability is an issue. The present study tries to overcome this issue by proposing a prediction scheme using a Seasonal ARIMA (SARIMA) model for short-term traffic flow prediction using only limited input data.
Method: A 3-lane arterial roadway in Chennai, India was selected as the study stretch, and limited flow data from only three consecutive days were used for model development with SARIMA. After the necessary differencing to make the input time series stationary, the autocorrelation function (ACF) and partial autocorrelation function (PACF) were plotted to identify a suitable order for the SARIMA model. The model parameters were found using the maximum likelihood method in R. The developed model was validated by performing a 24 h ahead forecast and comparing the predicted flows with the actual flow values. A comparison of the proposed model with the historic average and naive methods was also attempted, the effect of increasing the sample size of the input data on prediction results was studied, and short-term prediction of traffic flow during morning and evening peak periods was attempted using both historic and real-time data.
Concluding remarks: The mean absolute percentage error (MAPE) between actual and predicted flow was found to be in the range of 4–10%, which is acceptable for most ITS applications. The prediction scheme proposed in this study could be considered in situations where the database is a major constraint during model development with ARIMA.
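The identification step mentioned above (difference the series, then inspect the ACF) can be sketched on synthetic data; the series below is invented and only mimics an hourly flow with a trend and a daily cycle, not the Chennai data.

```python
import math
import random

period = 24
rng = random.Random(7)
# Synthetic hourly "flow": trend + daily cycle + noise (illustrative only)
series = [100 + 0.5 * t + 30 * math.sin(2 * math.pi * t / period)
          + rng.gauss(0, 5) for t in range(period * 10)]

def diff(x, lag=1):
    return [x[i] - x[i - lag] for i in range(lag, len(x))]

def acf(x, lag):
    m = sum(x) / len(x)
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[i] - m) * (x[i - lag] - m) for i in range(lag, len(x))) / c0

# Regular difference removes the trend; seasonal difference removes the cycle
stationary = diff(diff(series, 1), period)
print(round(acf(stationary, period), 2))
```

The pronounced negative autocorrelation at the seasonal lag is the kind of ACF signature used to pick the seasonal moving-average order of a SARIMA model; the actual fitting was done by maximum likelihood in R.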
Google Earth Engine for Informal Settlement Mapping: A Random Forest Classification Using Spectral and Textural Information
Accurate and reliable informal settlement maps are fundamental decision-making tools for planning and for expediting informed management of cities. However, extraction of spatial information for informal settlements has remained a mammoth task due to the spatial heterogeneity of urban landscape components, which requires complex analytical processes. The Google Earth Engine (GEE) platform, with its cloud computing prowess, now provides unique opportunities to map informal settlements with precision and enhanced accuracy. This paper leverages cloud-based computing techniques within GEE to integrate spectral and textural features for accurate extraction of the location and spatial extent of informal settlements in Durban, South Africa. It investigates the potential and advantages of GEE's image processing techniques for precisely depicting morphologically varied informal settlements. Seven input data models derived from Sentinel-2A bands, band-derived texture metrics, and spectral indices were investigated through a random forest supervised classification, with the main objective of exploring the value of different input data combinations for accurately mapping informal settlements. The results revealed that classification based on spectral bands plus textural information yielded the highest informal settlement identification accuracy (94% F-score), while the addition of spectral indices decreased mapping accuracy. The results also confirm that the highest spatial accuracy is achieved with the 'textural features' model, which yielded the lowest root-mean-square log error (0.51) and mean absolute percent error (0.36). Our approach highlights the capability of GEE's integrative data processing in extracting morphological variations of informal settlements in rugged and heterogeneous urban landscapes with reliable accuracy.
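As a hedged sketch of the kinds of per-pixel inputs such a classification stacks, the snippet below computes a spectral index (NDVI from red and near-infrared reflectances) and a simple local-variance texture proxy on an invented 3x3 patch. The actual study derives richer texture metrics inside GEE and feeds the stacked bands to a random forest.

```python
# Invented 3x3 reflectance patches standing in for Sentinel-2 red and NIR bands
red = [[0.10, 0.12, 0.30],
       [0.11, 0.13, 0.32],
       [0.10, 0.14, 0.31]]
nir = [[0.50, 0.52, 0.35],
       [0.51, 0.53, 0.36],
       [0.50, 0.54, 0.34]]

def ndvi(r, n):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red), a standard spectral index."""
    return [[(n[i][j] - r[i][j]) / (n[i][j] + r[i][j])
             for j in range(len(r[0]))] for i in range(len(r))]

def local_variance(img):
    """Crude texture proxy: variance over the window (here the full patch)."""
    flat = [v for row in img for v in row]
    m = sum(flat) / len(flat)
    return sum((v - m) ** 2 for v in flat) / len(flat)

idx = ndvi(red, nir)
print(round(idx[0][0], 2), round(local_variance(nir), 4))
```

Texture features like this capture the fine-grained spatial variability that distinguishes dense informal roofing from formal neighborhoods, which is why the textural model performed so strongly in the study.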
Modification of computer-aided modelling input data based on medium-scale fire tests of wooden beams
The aim of the paper was to optimize the material property settings of a computer model describing heat transfer in a wooden beam exposed to thermal loading from a porcelain radiation panel. The methodology was based on performing medium-scale fire tests as the basis for creating a finite element model with six different setups of material characteristics derived from the test outputs. When adjusting the settings, the T-history method was used to determine the beginning and end of the phase change of the water content in the wood, the thermal conductivity was adjusted based on density and moisture content, and enthalpy was used instead of specific heat. The results of the simulations were compared with the real medium-scale fire tests, which showed the importance of adjusting the input data. Based on the T-history method, the setup with a thermal conductivity of 0.35 W·m⁻¹·K⁻¹ at a temperature of 114.8 °C proved to be the best, with a coefficient of determination of 98.7%. The simulation results also suggest that there could be a correlation between the moisture content of the wood and the maximum thermal conductivity of the wood during the phase change of water.
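A minimal sketch of the kind of heat-transfer computation being calibrated, assuming a 1D explicit finite-difference scheme with a temperature-dependent conductivity capped at the paper's adjusted peak of 0.35 W·m⁻¹·K⁻¹. The geometry, property values, conductivity ramp, and boundary temperature are invented for illustration, not the paper's calibrated inputs.

```python
def conductivity(temp_c):
    # Assumed linear ramp capped at 0.35 W/(m*K), the paper's adjusted peak
    return min(0.12 + 0.002 * temp_c, 0.35)

nx, dx, dt = 20, 0.005, 0.05     # nodes, node spacing (m), time step (s)
rho, cp = 450.0, 1600.0          # assumed density (kg/m^3), heat capacity (J/(kg*K))
T = [20.0] * nx                  # initial temperature field (deg C)

for _ in range(2000):            # 100 s of exposure
    T[0] = 400.0                 # radiant-panel side held hot (assumption)
    new = T[:]
    for i in range(1, nx - 1):
        # Explicit update of the 1D heat equation with k evaluated locally
        k = conductivity(T[i])
        alpha = k / (rho * cp)
        new[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    T = new

print(round(T[1], 1), round(T[-1], 1))   # near-surface vs far-side temperature
```

Even in this toy form, the conductivity function is the "input data" knob: changing its cap or ramp shifts the simulated near-surface temperatures, which is exactly the sensitivity the paper exploits when matching simulations to the fire tests.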