180,375 result(s) for "mechanism"
Making Machines with Springs
Why is a spring like a simple machine? What forces do you need for a spring to change shape? How do springs store energy? Look at everything from historical examples of springs, such as a ballista, to the role of levers in complex machines, such as racing cars.
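The book's questions about force and stored energy reduce to Hooke's law, F = -kx, and the elastic potential energy E = ½kx². A minimal sketch (illustrative, not from the book):

```python
# Illustrative sketch of spring mechanics: Hooke's law, F = -k*x,
# and the elastic potential energy stored in a spring, E = 1/2*k*x^2.

def spring_force(k: float, x: float) -> float:
    """Restoring force (N) of a spring with stiffness k (N/m) displaced by x (m)."""
    return -k * x

def stored_energy(k: float, x: float) -> float:
    """Elastic potential energy (J) stored at displacement x (m)."""
    return 0.5 * k * x ** 2

# Example: a 200 N/m spring compressed by 0.05 m
force = spring_force(200.0, 0.05)    # -10.0 N (pushes back toward rest)
energy = stored_energy(200.0, 0.05)  # about 0.25 J, released when the spring relaxes
```

The sign convention makes the force oppose the displacement, which is why a compressed spring in a mechanism (a ballista arm, a valve return) drives its load back toward the rest position.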
Hydroformer: Frequency Domain Enhanced Multi‐Attention Transformer for Monthly Lake Level Reconstruction With Low Data Input Requirements
Lake level changes are critical indicators of hydrological balance and climate change, yet long‐term monthly lake level reconstruction is challenging with incomplete or short‐term data. Data‐driven models, while promising, struggle with nonstationary lake level changes and complex dependencies on meteorological factors, limiting their applicability. Here, we introduce the Hydroformer, a frequency domain enhanced multi‐attention Transformer model designed for monthly lake level reconstruction, utilizing reanalysis data. This model features two innovative mechanisms: (a) Frequency‐Enhanced Attention (FEA) for capturing long‐term temporal dependence, and (b) Causality‐based Cross‐dimensional Attention (CCA) to elucidate how specific meteorological factors influence lake level. Seasonal and trend patterns of catchment meteorological factors and lake level are initially identified by a time series decomposition block, then independently learned and refined within the model. Tested across 50 lakes globally, the Hydroformer excelled in reconstruction periods ranging from half to three times the training‐test length. The model exhibited good performance even when training data missing rates were below 50%, particularly in lakes with significant seasonal fluctuations. The Hydroformer demonstrated robust generalization across lakes of varying sizes, from 10.11 to 18,135 km², with median values for R², MAE, MSE, and RMSE at 0.813, 0.313, 0.215, and 0.4, respectively. Furthermore, the Hydroformer outperformed data‐driven models, improving MSE by 29.2% and MAE by 24.4% compared to the next best model, the FEDformer. Our method proposes a novel approach for reconstructing long‐term water level changes and managing lake resources under climate change.

Plain Language Summary: Lake water levels, as key indicators of hydrologic dynamics and catchment balance, are vital for understanding climate impacts and managing water resources. However, the lack of continuous measurements for most global lakes, combined with the inability of traditional data‐driven models to effectively decipher complex interactions with catchment hydrological processes, leads to significant gaps in generalizability, accuracy, and reconstructive length. Given these limitations, accurate monthly reconstructions of lake level remain a persistent challenge. To address this, we develop Hydroformer, an innovative frequency domain enhanced multi‐attention Transformer model, utilizing reanalysis data for monthly lake level reconstruction. It employs two innovative attention mechanisms: Frequency‐Enhanced Attention for capturing long‐term temporal dependencies and Causality‐based Cross‐dimensional Attention for cross‐dimensional causal dependencies between catchment meteorological factors and lake level. Through a decomposition block, the model efficiently recognizes and refines inherent seasonal and trend patterns, leading to a comprehensive understanding of lake behaviors. Through testing on 50 global lakes, the Hydroformer has exhibited exceptional performance in reconstructing water levels for lakes ranging from 10.11 to 18,135 km², adeptly handling short‐term, long‐term, and varying proportions of data gaps. It notably outperforms supervised data‐driven models. This positions it as a vital instrument for monthly lake level reconstruction, showcasing the power of integrating advanced artificial intelligence techniques in hydrological modeling.

Key Points:
• A novel frequency domain enhanced multi‐attention Transformer model, Hydroformer, has been built for reconstructing monthly lake level using reanalysis data.
• The model accurately extends reconstructions 2–3 times the training data length, excelling with less than 50% missing training data.
• Hydroformer surpasses advanced AI‐based models, improving MSE and MAE by over 20% and demonstrating strong generalization across lakes of varying sizes.
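The frequency-domain idea behind Frequency-Enhanced Attention can be illustrated with a small, self-contained sketch (an assumed illustration of the general technique, not the Hydroformer implementation): transform a monthly series to the frequency domain, keep only the strongest modes, and transform back, which isolates the long-term periodic structure the attention mechanism is meant to capture.

```python
import numpy as np

# Hypothetical sketch of the core signal-processing step behind a
# frequency-enhanced attention block: keep only the dominant Fourier
# modes of a monthly series. This is NOT the authors' code.

def keep_top_modes(series: np.ndarray, k: int) -> np.ndarray:
    """Reconstruct `series` from its k largest-magnitude Fourier modes."""
    spec = np.fft.rfft(series)
    keep = np.argsort(np.abs(spec))[-k:]   # indices of the k strongest modes
    filtered = np.zeros_like(spec)
    filtered[keep] = spec[keep]            # zero out everything else
    return np.fft.irfft(filtered, n=len(series))

# Example: a noisy annual cycle over 10 years of monthly values
t = np.arange(120)
level = np.sin(2 * np.pi * t / 12) + 0.2 * np.random.default_rng(0).normal(size=120)
smooth = keep_top_modes(level, k=3)        # retains the seasonal signal, drops noise
```

Keeping a handful of modes acts as a learned-free stand-in for the seasonal component; in the paper's framing, a decomposition block separates such seasonal and trend patterns before they are refined by attention.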
In search of mechanisms : discoveries across the life sciences
With In Search of Mechanisms, Carl F. Craver and Lindley Darden offer both a descriptive and an instructional account of how biologists discover mechanisms. Drawing on examples from across the life sciences and through the centuries, Craver and Darden compile an impressive toolbox of strategies that biologists have used and will use again to reveal the mechanisms that produce, underlie, or maintain the phenomena characteristic of living things. They discuss the questions that figure in the search for mechanisms, characterizing the experimental, observational, and conceptual considerations used to answer them, all the while providing examples from the history of biology to highlight the kinds of evidence and reasoning strategies employed to assess mechanisms. At a deeper level, Craver and Darden pose a systematic view of what biology is, of how biology makes progress, of how biological discoveries are and might be made, and of why knowledge of biological mechanisms is important for the future of the human species. -- Publisher website.
The Chlamydomonas CO2-concentrating mechanism and its potential for engineering photosynthesis in plants
To meet the food demands of a rising global population, innovative strategies are required to increase crop yields. Improvements in plant photosynthesis by genetic engineering show considerable potential towards this goal. One prospective approach is to introduce a CO2-concentrating mechanism into crop plants to increase carbon fixation by supplying the central carbon-fixing enzyme, Rubisco, with a higher concentration of its substrate, CO2. A promising donor organism for the molecular machinery of this mechanism is the eukaryotic alga Chlamydomonas reinhardtii. This review summarizes the recent advances in our understanding of carbon concentration in Chlamydomonas, outlines the most pressing gaps in our knowledge and discusses strategies to transfer a CO2-concentrating mechanism into higher plants to increase photosynthetic performance.
A robotic framework for the mobile manipulator : theory and application
\"This book helps readers visualize an end-to-end workflow for making a robot system work in a targeted environment. It is considered as a bridge from theories to real products, in which robotic software modules and the robotic system integration are mainly concerned\"-- Provided by publisher.
Graph neural networks in node classification: survey and evaluation
Neural networks have proven effective at many machine learning tasks, for example convolutional neural networks in computer vision and recurrent neural networks in natural language processing. However, the inputs to these deep learning paradigms all have a Euclidean structure, e.g., images or texts. It is difficult to apply such networks directly to graph-based applications such as node classification, since a graph is a typical non-Euclidean structure in the machine learning domain. Graph neural networks are designed to handle graph-structured input and have developed rapidly thanks to growing research attention. In this paper, we provide a comprehensive review of applying graph neural networks to the node classification task. First, the state-of-the-art methods are discussed and divided into three main categories: convolutional mechanism, attention mechanism, and autoencoder mechanism. Afterward, extensive comparative experiments are conducted on several benchmark datasets, including citation networks and co-author networks, to compare the performance of different methods under diverse evaluation metrics. Finally, several suggestions are provided for future research based on the experimental results.
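As an illustration of the convolutional mechanism category, a single GCN-style propagation step, H' = ReLU(D̂^(-1/2)(A + I)D̂^(-1/2) X W), can be sketched as follows (a common textbook formulation, not necessarily the exact models surveyed):

```python
import numpy as np

# Minimal sketch of one graph-convolutional propagation step:
# add self-loops, symmetrically normalize the adjacency matrix,
# then mix neighbor features through a learned weight matrix.
# Illustrative only; real node classifiers stack such layers and
# train the weights by gradient descent on labeled nodes.

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weights: np.ndarray) -> np.ndarray:
    a_hat = adj + np.eye(adj.shape[0])                 # A + I (self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return np.maximum(norm @ feats @ weights, 0.0)     # ReLU activation

# Toy graph: 3 nodes in a path, 2 input features, 2 hidden features
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3, 2)
weights = np.ones((2, 2))
hidden = gcn_layer(adj, feats, weights)                # shape (3, 2)
```

Each node's new representation is a normalized average of its own and its neighbors' features, which is what lets label information propagate through the graph during node classification.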
A comparative analysis of joint clearance effects on articulated and partly compliant mechanisms
Clearance in articulated mechanisms is inevitable, arising primarily from design, manufacturing, and assembly processes or from wear. It degrades both the kinematic and dynamic performance of a mechanism. A compliant mechanism, which contains at least one flexible member alongside conventional rigid links, is an attractive choice for reducing the number of movable joints and hence their clearance effects. In this study, conventional and compliant slider–crank mechanisms with joint clearance are used to investigate and compare the effects of joint clearance. A pseudo-rigid-body model of the compliant mechanism is constructed. For different clearance sizes and running speeds, the kinematic and dynamic performances of the mechanisms are compared. The results show that joint clearance leads to chaotic behavior in the kinematic and dynamic outputs of the mechanism. The flexibility of the small-length flexural pivot, i.e., the pseudo-joint, has a clear suspension effect that reduces the undesired reflections of joint clearance on the system outputs. This pseudo-joint also produces force-closed kinematic-pair behavior between journal and bearing in the joint with clearance, maintaining continuous contact by preventing separation of the journal and bearing.
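How radial joint clearance perturbs a slider–crank output can be illustrated with a simple kinematic sketch (assumed geometry and naming, not the paper's model): the journal may float within the bearing by up to the clearance c, shifting the effective pin location and hence the slider position.

```python
import math

# Illustrative sketch: ideal slider-crank kinematics plus the worst-case
# slider-position deviation caused by a radial clearance c at the
# crank-coupler joint. Assumed geometry; not the paper's dynamic model.

def slider_position(r: float, l: float, theta: float) -> float:
    """Slider displacement for crank radius r, coupler length l, crank angle theta."""
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def clearance_envelope(r: float, l: float, theta: float, c: float, n: int = 360):
    """Min/max slider position when the journal center shifts by up to c."""
    positions = []
    for i in range(n):
        phi = 2 * math.pi * i / n
        # journal offset (c*cos(phi), c*sin(phi)) perturbs the pin location
        x_pin = r * math.cos(theta) + c * math.cos(phi)
        y_pin = r * math.sin(theta) + c * math.sin(phi)
        positions.append(x_pin + math.sqrt(l**2 - y_pin**2))
    return min(positions), max(positions)

# Example: 50 mm crank, 200 mm coupler, 0.5 mm radial clearance
ideal = slider_position(0.05, 0.2, 1.0)
lo, hi = clearance_envelope(0.05, 0.2, 1.0, 0.0005)
```

Even this purely kinematic view shows an output uncertainty band on the order of twice the clearance; the chaotic dynamic behavior reported in the study comes from the journal impacting the bearing inside this band, which the compliant pseudo-joint suppresses by keeping the pair in continuous contact.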
EQUIVALENCE OF STOCHASTIC AND DETERMINISTIC MECHANISMS
We consider a general social choice environment that has multiple agents, a finite set of alternatives, independent types, and atomless type distribution. We show that for any Bayesian incentive compatible mechanism, there exists an equivalent deterministic mechanism that (1) is Bayesian incentive compatible; (2) delivers the same interim expected allocation probabilities and the same interim expected utilities for all agents; and (3) delivers the same ex ante expected social surplus. This result holds in settings with a rich class of utility functions, multidimensional types, interdependent valuations, and in settings without monetary transfers. To prove our result, we develop a novel methodology of mutual purification, and establish its link with the mechanism design literature.