60,069 results for "Ensemble"
A survey on ensemble learning
Despite significant successes in knowledge discovery, traditional machine learning methods may fail to achieve satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data. The reason is that it is difficult for these methods to capture the multiple characteristics and underlying structure of such data. In this context, how to construct an effective and efficient knowledge discovery and mining model has become an important topic in data mining. Ensemble learning, an active research area, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features via a variety of transformations. Based on these learned features, multiple learning algorithms are used to produce weak predictive results. Finally, ensemble learning fuses the information in these results via adaptive voting schemes to achieve knowledge discovery and better predictive performance. In this paper, we review the research progress of the mainstream approaches to ensemble learning and classify them by their characteristics. In addition, we present challenges and possible research directions for each mainstream approach, and we introduce the combination of ensemble learning with other active areas of machine learning, such as deep learning and reinforcement learning.
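The pipeline the abstract outlines (train several diverse weak learners on the same data, then fuse their predictions by voting) can be illustrated with a short sketch. This is a minimal example using scikit-learn, not code from the survey; the particular base learners and the synthetic dataset are arbitrary illustrative choices.

```python
# Minimal sketch of a voting ensemble: several weak learners trained on the
# same data, fused by soft voting. Base learners and dataset are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
]

# Soft voting averages the predicted class probabilities of the members.
ensemble = VotingClassifier(estimators=base_learners, voting="soft")
ensemble.fit(X_train, y_train)

for name, est in base_learners:
    print(name, est.fit(X_train, y_train).score(X_test, y_test))
print("ensemble", ensemble.score(X_test, y_test))
```

On such tasks the fused prediction typically matches or exceeds the best individual member, which is the basic promise of the voting schemes the survey reviews.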
Brownian regularity for the Airy line ensemble, and multi-polymer watermelons in Brownian last passage percolation
The Airy line ensemble is a positive-integer-indexed system of random continuous curves whose finite-dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class; its highest curve, for example, is the Airy₂ process. In this paper, we employ the Brownian Gibbs property to make a close comparison between the Airy line ensemble's curves after affine shift and Brownian bridge, proving the finiteness of a superpolynomially growing moment bound on Radon–Nikodym derivatives. We also determine the value of a natural exponent describing, in Brownian last passage percolation, the decay in probability for the existence of several near-geodesics that are disjoint except for their common endpoints; here the notion of 'near' refers to a small deficit in scaled geodesic energy, with the parameter specifying this nearness tending to zero. To prove both results, we introduce a technique, which may be useful elsewhere, for finding upper bounds on the probabilities of events concerning random systems of curves enjoying the Brownian Gibbs property. Several results in this article play a fundamental role in a further study of Brownian last passage percolation in three companion papers (Hammond 2017a, b, c), in which geodesic coalescence and geodesic energy profiles are investigated in scaled coordinates.
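For readers unfamiliar with the model, Brownian last passage percolation assigns to n independent standard Brownian motions B_1, ..., B_n the last passage value M = sup over non-decreasing jump times 0 <= t_1 <= ... <= t_{n-1} <= T of sum_i [B_i(t_i) - B_i(t_{i-1})]. The sketch below is a minimal Monte Carlo discretization of this quantity (the top geodesic's energy); it does not compute the near-geodesic exponent or the Radon–Nikodym bounds from the paper, and the grid and ensemble sizes are illustrative.

```python
import numpy as np

def brownian_lpp(n_curves=10, n_steps=2000, T=1.0, seed=0):
    """Last passage value across n_curves discretized Brownian motions.

    Computes M = sup_{0 <= t_1 <= ... <= t_{n-1} <= T}
                 sum_i [B_i(t_i) - B_i(t_{i-1})]
    (with t_0 = 0, t_n = T) by dynamic programming on the time grid.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    steps = rng.normal(0.0, np.sqrt(dt), size=(n_curves, n_steps))
    B = np.concatenate([np.zeros((n_curves, 1)),
                        np.cumsum(steps, axis=1)], axis=1)

    E = B[0]  # best energy up to each grid time while still on the first curve
    for i in range(1, n_curves):
        # max over earlier jump times j <= k of E[j] - B_i[j], then add B_i[k]
        E = np.maximum.accumulate(E - B[i]) + B[i]
    return E[-1]  # the path must finish on the last curve at time T

if __name__ == "__main__":
    vals = [brownian_lpp(seed=s) for s in range(200)]
    print("mean last passage value:", np.mean(vals))  # near 2*sqrt(n) for large n
```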
Louder and Faster: Pain, Joy, and the Body Politic in Asian American Taiko
"Louder and Faster is a cultural study of the phenomenon of Asian American taiko, the thundering, athletic drumming tradition that originated in Japan. Immersed in the taiko scene for twenty years, Deborah Wong has witnessed cultural and demographic changes and the exponential growth and expansion of taiko, particularly in Southern California. Through her participatory ethnographic work, she reveals a complicated story embedded in memories of Japanese American internment and legacies of imperialism, Asian American identity and politics, a desire to be seen and heard, and the intersection of culture and global capitalism. Exploring the materialities of the drums, costumes, and bodies that make sound, analyzing their relationship to capitalist multiculturalism, and investigating the gender politics of taiko, Louder and Faster considers both the promises and pitfalls of music and performance as an antiracist practice. The result is a vivid glimpse of an Asian American presence that is both loud and fragile." (Provided by publisher.)
The Ensemble Approach to Forecasting: A Review and Synthesis
Ensemble forecasting is a modeling approach that combines multiple data sources and models of different types, built under alternative assumptions and using distinct pattern-recognition methods. The aim is to use all available information in making predictions, without the limiting and arbitrary choices and dependencies that result from a single statistical or machine learning approach, a single functional form, or a limited data source, and with uncertainties systematically accounted for. The outputs of ensemble models can be presented as a range of possibilities, indicating the amount of uncertainty in the modeling. We review methods and applications of ensemble models both within and outside of transport research. The review finds that ensemble forecasting generally improves forecast accuracy and robustness in many fields, particularly in weather forecasting, where the method originated. We note that ensemble methods are highly siloed across disciplines and that both the knowledge and the application of ensemble forecasting are lacking in transport. In this paper we review and synthesize methods of ensemble forecasting within a unifying framework, categorizing ensemble methods into two broad and not mutually exclusive categories, namely combining models and combining data; the framework further extends to ensembles of ensembles. We apply ensemble forecasting to transport-related cases, demonstrating the potential of ensemble models to improve forecast accuracy and reliability. This paper sheds light on the apparatus of ensemble forecasting, which we hope contributes to a better understanding and wider adoption of ensemble models.
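As a concrete instance of the "combining models" category, the sketch below fits several regressors of different functional forms to the same synthetic data, averages their predictions as the combined forecast, and uses the cross-member spread as a rough uncertainty range. It is an illustrative toy under invented data, not a method or case study from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a forecasting dataset (features -> future demand).
X = rng.uniform(0.0, 10.0, size=(300, 3))
y = 2.0 * X[:, 0] + 5.0 * np.sin(X[:, 1]) + rng.normal(0.0, 1.0, size=300)
X_train, y_train, X_new = X[:250], y[:250], X[250:]

# "Combining models": members of different functional forms fit the same data.
members = [
    LinearRegression(),
    GradientBoostingRegressor(random_state=0),
    KNeighborsRegressor(n_neighbors=10),
]
preds = np.column_stack([m.fit(X_train, y_train).predict(X_new)
                         for m in members])

combined = preds.mean(axis=1)                    # ensemble forecast
spread = preds.max(axis=1) - preds.min(axis=1)   # crude uncertainty indicator

print("first combined forecast: %.2f (member spread %.2f)"
      % (combined[0], spread[0]))
```

Presenting the member range alongside the mean is the simplest way to expose the forecast uncertainty that the abstract emphasizes.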
Levenberg–Marquardt forms of the iterative ensemble smoother for efficient history matching and uncertainty quantification
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and hence the need for iterative assimilation methods. A previous version of the iterative ensemble smoother, based on a Gauss–Newton formulation, was able to match data relatively well, but only after a large number of iterations. A multiple data assimilation (MDA) method was generally more efficient for large problems but lacked the ability to continue "iterating" if the data mismatch remained too large. In this paper, we develop an efficient iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations, when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and its simplified version, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain; in this case, Markov chain Monte Carlo results are available for comparison with the ensemble-based results. The third is the Brugge benchmark case, with both 10 and 20 years of history. The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. MDA often performed well, especially at early iterations, where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
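To make the structure of such an update concrete, the sketch below implements an LM-damped, ensemble-based update on an invented two-parameter toy problem. It keeps only the data mismatch term, loosely in the spirit of the simplified LM-EnRML (approx) variant; the full LM-EnRML algorithm additionally handles a prior (model mismatch) term and a modified Hessian approximation, as described in the paper. The forward model g, the covariances, and the damping schedule are all illustrative assumptions.

```python
import numpy as np

def g(m):
    """Invented two-parameter toy forward model (not from the paper)."""
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2])

rng = np.random.default_rng(1)
n_ens, n_m, n_d = 100, 2, 2
d_obs = np.array([2.0, 1.0])
C_D = 0.01 * np.eye(n_d)  # observation-error covariance

M = rng.normal(0.5, 1.0, size=(n_m, n_ens))               # prior ensemble
noise = rng.multivariate_normal(np.zeros(n_d), C_D, size=n_ens).T
d_uc = d_obs[:, None] + noise                              # perturbed observations
lam = 1.0                                                  # LM damping parameter

for _ in range(10):
    D = np.column_stack([g(M[:, j]) for j in range(n_ens)])
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (n_ens - 1)   # parameter/predicted-data covariance
    C_DD = dD @ dD.T / (n_ens - 1)   # predicted-data covariance
    # LM-damped Kalman-type gain: a large lam shrinks the update step.
    K = C_MD @ np.linalg.inv((1.0 + lam) * C_DD + C_D)
    M_try = M + K @ (d_uc - D)
    D_try = np.column_stack([g(M_try[:, j]) for j in range(n_ens)])
    before = np.mean(np.sum((D - d_uc) ** 2, axis=0))
    after = np.mean(np.sum((D_try - d_uc) ** 2, axis=0))
    # Standard LM heuristic: accept and relax damping if the misfit drops,
    # otherwise raise the damping and take a shorter step next iteration.
    if after < before:
        M, lam = M_try, lam / 10.0
    else:
        lam *= 10.0

print("posterior ensemble mean:", M.mean(axis=1))
```

The damping heuristic is what distinguishes this from a plain ES-style update: early, highly nonlinear steps are shortened, which is the roughness-reducing behavior the abstract attributes to the LM parameter.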