Search Results

19,087 result(s) for "performance modeling"
Performance prediction of parallel computing models to analyze cloud-based big data applications
Performance evaluation of a cloud center is a necessary prerequisite to fulfilling contractual quality of service, particularly for big data applications. However, effectively evaluating the performance of cloud services is challenging due to the complexity of those services and the diversity of big data applications. In this paper, we propose a performance evaluation model for parallel computing models deployed in cloud centers to support big data applications. In this model, a big data application is divided into many parallel tasks whose arrivals follow a general distribution. Our approach also accounts for resource heterogeneity, resource contention among cloud nodes, and data storage strategy, all of which affect the performance of parallel computing models. The model allows us to calculate key performance indicators of a cloud center, such as the mean number of tasks in the system, the probability that a task obtains immediate service, and task waiting time, and it can be used to predict application execution time. We demonstrate the utility of the model through simulations and benchmarking with the WordCount and TeraSort applications.
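The abstract does not give the model's equations, so as a rough illustration of the kind of indicators it reports (mean number of tasks in the system, probability of immediate service, waiting time), the sketch below uses the classical M/M/c queue; this is an assumption standing in for the paper's more general arrival model and cloud-specific factors.

```python
import math

def mmc_metrics(arrival_rate, service_rate, servers):
    """Classic M/M/c formulas for the indicators the abstract mentions.
    Illustrative only; the paper assumes generally distributed arrivals
    and a richer cloud model with heterogeneity and contention."""
    rho = arrival_rate / (servers * service_rate)      # server utilization
    if rho >= 1:
        raise ValueError("system is unstable (utilization >= 1)")
    a = arrival_rate / service_rate                    # offered load in Erlangs
    # Erlang C: probability that an arriving task must wait
    num = a**servers / (math.factorial(servers) * (1 - rho))
    den = sum(a**k / math.factorial(k) for k in range(servers)) + num
    p_wait = num / den
    wq = p_wait / (servers * service_rate - arrival_rate)  # mean wait in queue
    l = arrival_rate * (wq + 1 / service_rate)             # mean tasks in system (Little's law)
    return {"mean_tasks_in_system": l,
            "p_immediate_service": 1 - p_wait,
            "mean_waiting_time": wq}

print(mmc_metrics(arrival_rate=80.0, service_rate=10.0, servers=10))
```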
CATS: A high‐performance software framework for simulating plant migration in changing environments
Considering local population dynamics and dispersal is crucial for projecting species' range adaptations in changing environments. Dynamic models that include these processes are highly computationally intensive, with consequent restrictions on spatial extent and/or resolution. We present CATS, an open-source, extensible modelling framework for simulating spatially and temporally explicit population dynamics of plants. It can be used in conjunction with species distribution models or via direct parametrisation of vital rates, and it allows fine-grained control over the models of demographic and dispersal processes. The performance and flexibility of CATS are exemplified (i) by modelling the range shift of four plant species under three future climate scenarios across Europe at a spatial resolution of 100 m, and (ii) by exploring the consequences of demographic compensation for range expansion on artificial landscapes. The presented software aims to leverage available computational resources and to lower the barrier to entry for large-extent, fine-resolution simulations of plant range shifts in changing environments.
Predictive athlete performance modeling with machine learning and biometric data integration
The purpose of this study is to propose a new integrative framework for athletic performance prediction based on state-of-the-art machine learning analysis and biometric data integration. By merging physiological signals (heart rate variability, oxygen consumption, muscle activation patterns) and psychological signals (mental toughness, athlete engagement, group cohesion) with contextual training data, we create a hybrid model that outperforms traditional unidimensional models. Our models were trained using gradient boosting and neural networks to learn the complex non-linear relationships between physical and psychological performance drivers. On a rich sample of 480 athletes from different sports, the proposed model predicted performance outcomes with R² = 0.90, outperforming conventional statistical approaches (R² = 0.77) and existing machine-learning-based methods (R² = 0.77). Feature-importance analysis shows that the strongest predictors of performance are Functional Movement Screening scores (13.7%), athlete dedication (11.5%), and maximum acceleration capability (10.2%), reflecting the interplay of biomechanics, explosive power, and psychological commitment. These findings indicate that athletic performance prediction requires a multi-dimensional approach supported by sophisticated data-fusion techniques. The framework is useful to coaches and sports scientists because it supports individualized injury-risk mitigation and physiologically and psychologically informed interventions. By integrating multiple factors, this approach constitutes an important advance in sports analytics, providing a comprehensive perspective on the complex factors that influence elite athletic performance.
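The abstract names gradient boosting as one of the learners and reports R² and feature importances; the following sketch uses entirely synthetic data and hypothetical feature names (fms_score, dedication, max_acceleration, etc.) to show the shape of such a pipeline in scikit-learn, not the authors' actual workflow.

```python
# Minimal sketch (not the authors' pipeline): fit a gradient-boosting regressor on
# synthetic physiological + psychological features, report R² and feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 480  # sample size mentioned in the abstract
features = ["fms_score", "hrv", "vo2", "dedication", "max_acceleration"]  # hypothetical names
X = rng.normal(size=(n, len(features)))
# Synthetic target: a non-linear mix of "physical" and "psychological" drivers
y = 0.6 * X[:, 0] + 0.4 * X[:, 3] * X[:, 4] + 0.2 * np.tanh(X[:, 1]) + rng.normal(scale=0.3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.1%}")
```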
Phase-Level Analysis and Forecasting of System Resources in Edge Device Cryptographic Algorithms
With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across key generation, encryption, and decryption phases. Based on the analysis, we train a unified Random Forest model using execution-context and temporal features, achieving R² values of up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our proposed approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. As our approach is grounded in phase-level computational characteristics rather than in any single algorithm, it provides generalizable insights that can be extended to a broader range of cryptographic algorithms with comparable phase-level execution patterns and to heterogeneous edge architectures.
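The paper's unified Random Forest model is not reproduced here; the sketch below only illustrates the general recipe, training a scikit-learn RandomForestRegressor on assumed execution-context and temporal features (algorithm_id, phase_id, threads, elapsed_seconds are invented names) and reporting R² on held-out data.

```python
# Minimal sketch (assumptions, not the paper's setup): Random Forest predicting
# power draw from execution-context and temporal features, evaluated with R².
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(0, 5, n),        # algorithm_id (e.g. AES variant, ChaCha20, ECC, RSA)
    rng.integers(0, 3, n),        # phase_id (key generation / encryption / decryption)
    rng.integers(1, 9, n),        # threads (parallelism level)
    rng.uniform(0, 10, n),        # elapsed_seconds within the phase
])
# Synthetic power target loosely tied to parallelism, phase, and elapsed time
power = 1.5 + 0.4 * X[:, 2] + 0.2 * X[:, 1] + 0.05 * X[:, 3] + rng.normal(scale=0.1, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=1)
rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("power R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
```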
Model-based performance prediction in software development: a survey
Over the last decade, a lot of research has been directed toward integrating performance analysis into the software development process. Traditional software development methods focus on software correctness, deferring performance issues to later in the development process. This approach does not take into account the fact that performance problems may require considerable changes in design, for example at the software architecture level or, even worse, at the requirements analysis level. Several approaches have been proposed to address early software performance analysis. Although some of them have been applied successfully, we are still far from seeing performance analysis integrated into ordinary software development. In this paper, we present a comprehensive review of recent research in the field of model-based performance prediction at software development time, in order to assess the maturity of the field and point out promising research directions.
On‐line extreme learning algorithm based identification and non‐linear model predictive controller for way‐point tracking application of an autonomous underwater vehicle
In most surveillance applications of autonomous underwater vehicles (AUVs), the vehicle is required to follow desired horizontal way-points at which oceanographic data need to be collected. In view of this, way-point-based motion planning is investigated in this study. The proposed work involves identification of the AUV dynamics and the design of adaptive model predictive controllers, namely a linear adaptive model predictive controller (LAMPC) and a non-linear adaptive model predictive controller (NAMPC). Owing to its fast convergence rate and robustness, an on-line sequential extreme learning machine (OS-ELM) is employed to estimate the AUV dynamics. To improve the OS-ELM modelling performance, the Jaya optimisation algorithm is applied to optimise the hidden-layer parameters. The desired surveillance region is formulated in terms of way-points using the heading angle obtained from the desired line-of-sight path. Simulations are performed in MATLAB using the proposed NAMPC and LAMPC and a previously reported optimal controller, the inverse optimal self-tuning PID (IOSPID) controller. Subsequently, real-time experiments are performed using a prototype AUV in a swimming pool. The simulation and experimental results show that the proposed controller exhibits efficient tracking performance in the face of actuator constraints compared with the LAMPC and IOSPID controllers.
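As background for the identification step, here is a minimal OS-ELM sketch: a single hidden layer with fixed random weights whose output weights are updated recursively as new samples arrive. The layer size, toy dynamics, and regularisation term are assumptions, and the Jaya optimisation of hidden-layer parameters mentioned in the abstract is not included.

```python
# Minimal OS-ELM sketch (illustrative reconstruction, not the authors' code).
import numpy as np

rng = np.random.default_rng(42)
n_in, n_hidden = 4, 40

W = rng.normal(size=(n_in, n_hidden))   # random input weights (kept fixed)
b = rng.normal(size=n_hidden)           # random biases (kept fixed)
hidden = lambda X: np.tanh(X @ W + b)   # hidden-layer output matrix H

# --- initial batch (at least n_hidden samples needed for the inverse) ---
X0 = rng.normal(size=(60, n_in))
y0 = np.sin(X0[:, 0]) + 0.1 * rng.normal(size=60)   # toy target dynamics
H0 = hidden(X0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))   # small ridge term for stability
beta = P @ H0.T @ y0                                      # initial output weights

# --- sequential updates, one chunk of new measurements at a time ---
for _ in range(50):
    Xk = rng.normal(size=(5, n_in))
    yk = np.sin(Xk[:, 0]) + 0.1 * rng.normal(size=5)
    Hk = hidden(Xk)
    S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)    # innovation covariance
    P = P - P @ Hk.T @ S @ Hk @ P                         # recursive least-squares update
    beta = beta + P @ Hk.T @ (yk - Hk @ beta)

X_test = rng.normal(size=(200, n_in))
pred = hidden(X_test) @ beta
print("test RMSE:", np.sqrt(np.mean((pred - np.sin(X_test[:, 0]))**2)))
```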
Advancing deep learning for expressive music composition and performance modeling
The pursuit of expressive and human-like music generation remains a significant challenge in the field of artificial intelligence (AI). While deep learning has advanced AI music composition and transcription, current models often struggle with long-term structural coherence and emotional nuance. This study presents a comparative analysis of three leading deep learning architectures for AI-generated music composition and transcription, namely Long Short-Term Memory (LSTM) networks, Transformer models, and Generative Adversarial Networks (GANs), using the MAESTRO dataset. Our key innovation lies in a dual evaluation framework that combines objective metrics (perplexity, harmonic consistency, and rhythmic entropy) with subjective human evaluation via a Mean Opinion Score (MOS) study involving 50 listeners. The Transformer model achieved the best overall performance (perplexity: 2.87, harmonic consistency: 79.4%, MOS: 4.3), indicating its superior ability to produce musically rich and expressive outputs. However, human compositions remained highest in perceptual quality (MOS: 4.8). Our findings provide a benchmarking foundation for future AI music systems and emphasize the need for emotion-aware modeling, real-time human-AI collaboration, and reinforcement learning to bridge the gap between machine-generated and human-performed music.
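Of the objective metrics listed, perplexity has a simple closed form: the exponential of the average negative log-likelihood the model assigns to the ground-truth tokens. The snippet below computes it from made-up token probabilities; it is not taken from the paper.

```python
# Perplexity sketch: exp of the mean negative log-likelihood over a token sequence.
import math

def perplexity(token_probs):
    """token_probs: probability the model assigned to each ground-truth note/event token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Example with invented probabilities; lower perplexity is better
probs = [0.42, 0.35, 0.51, 0.28, 0.47, 0.39]
print(round(perplexity(probs), 2))
```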
Quantitative approach to collaborative learning: performance prediction, individual assessment, and group composition
The benefits of collaborative learning, although widely reported, lack quantitative rigor and detailed insight into the dynamics of interactions within the group, while individual contributions and their impact on group members and on the collaborative work remain hidden behind joint group assessment. To bridge this gap, we address three important aspects of collaborative learning, focusing on quantitative evaluation and prediction of group performance. First, we use machine learning techniques to predict group performance from member-interaction data and thereby identify whether, and to what extent, group performance is driven by specific patterns of learning and interaction. Specifically, we explore the application of Extreme Learning Machines and Classification and Regression Trees to assess the predictability of group academic performance from live interaction data. Second, we propose a comparative model to unscramble individual student performances within the group. These performances are then used in a generative mixture model of group grading, expressed as an explicit combination of isolated individual grade expectations, and compared against actual group performance to define what we term collaboration synergy, a direct measure of the improvement due to collaborative learning. Finally, the impact of group composition, in terms of gender and skills, on learning performance and collaboration synergy is evaluated. The analysis indicates that group performance is highly predictable from the style and mechanics of collaboration alone, and it quantitatively supports the claim that heterogeneous groups, with diverse skills and genders, benefit more from collaborative learning than homogeneous groups.
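As a sketch of the "predict group performance from interaction data" step, the code below fits a CART (decision-tree) regressor on invented per-group interaction features; the feature names and data are hypothetical and only illustrate the setup, not the authors' dataset.

```python
# Minimal sketch (an assumption, not the authors' pipeline): CART regression of a
# group grade on simple interaction features, with cross-validated R².
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_groups = 200
# Hypothetical per-group features: messages exchanged, document edits,
# balance of contributions (0 = one member dominates, 1 = even), meetings held.
X = np.column_stack([
    rng.poisson(120, n_groups),
    rng.poisson(40, n_groups),
    rng.uniform(0, 1, n_groups),
    rng.integers(1, 10, n_groups),
])
grade = 50 + 0.1 * X[:, 0] + 0.2 * X[:, 1] + 20 * X[:, 2] + rng.normal(scale=5, size=n_groups)

cart = DecisionTreeRegressor(max_depth=4, random_state=7)
scores = cross_val_score(cart, X, grade, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```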
Adventures Beyond Amdahl’s Law: How Power-Performance Measurement and Modeling at Scale Drive Server and Supercomputer Design
Amdahl’s Law painted a bleak picture for large-scale computing. The implication was that parallelism was limited and therefore so was potential speedup. While Amdahl’s contribution was seminal and important, it drove others vested in parallel processing to define more clearly why large-scale systems are critical to our future and how they fundamentally provide opportunities for speedup beyond Amdahl’s predictions. In the early 2000s, much like Amdahl, we predicted dire consequences for large-scale systems due to power limits. While our early work was often dismissed, the implications were clear to some: power would ultimately limit performance. In this retrospective, we discuss how power-performance measurement and modeling at scale led to contributions that have driven server and supercomputer design for more than a decade. While the influence of these techniques is now indisputable, we discuss their connections, limits and additional research directions necessary to continue the performance gains our industry is accustomed to.
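For readers who want the formula behind the discussion, Amdahl's Law in its standard form is S(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the work; the short example below evaluates it. This is background, not content from the retrospective itself.

```python
# Amdahl's Law: speedup on n processors when a fraction p of the work is parallel,
# with the limiting speedup 1/(1 - p) no matter how many processors are added.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (16, 256, 4096):
    print(n, round(amdahl_speedup(p=0.95, n=n), 1))
print("limit:", round(1.0 / (1.0 - 0.95), 1))   # 20x for p = 0.95
```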
Performance Estimation using the Fitness-Fatigue Model with Kalman Filter Feedback
Tracking and predicting the performance of athletes is of great interest, not only in training science but also, increasingly, for serious hobbyists. The growing availability and use of smart watches and fitness trackers means that abundant data are becoming available, and there is great interest in using these data optimally for performance tracking and training optimization. One competitive model in this domain is the three-time-constant fitness-fatigue model by Busso, based on the model by Banister and colleagues. In the following, we show that this model can be written equivalently as a linear, time-variant state-space model. With this understanding, it becomes clear that all methods for optimal tracking in state-space models are directly applicable here. As an example, we show how a Kalman filter can be combined with the fitness-fatigue model in a mathematically consistent fashion. This gives us the opportunity to optimally incorporate measurements of performance and adapt the fitness and fatigue estimates in a data-driven manner. Results show that this approach clearly improves performance tracking and prediction over a range of different scenarios.
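To make the state-space idea concrete, the sketch below uses the simpler two-component Banister fitness-fatigue model (not Busso's three-time-constant variant) written as a linear state-space system, with a Kalman filter correcting the fitness and fatigue estimates whenever a performance measurement arrives; all parameter values and data are illustrative assumptions.

```python
# Fitness-fatigue model as a state-space system with Kalman filter feedback (sketch).
import numpy as np

tau_fit, tau_fat = 42.0, 7.0            # time constants in days (assumed)
k_fit, k_fat, p0 = 1.0, 2.0, 500.0      # gains and baseline performance (assumed)
A = np.diag([np.exp(-1 / tau_fit), np.exp(-1 / tau_fat)])   # state transition
B = np.ones((2, 1))                      # training load enters both states
H = np.array([[k_fit, -k_fat]])          # measurement: performance = p0 + H x
Q = np.eye(2) * 0.5                      # process noise covariance (assumed)
R = np.array([[25.0]])                   # measurement noise covariance (assumed)

x = np.zeros((2, 1))                     # [fitness, fatigue] estimate
P = np.eye(2) * 10.0                     # estimate covariance

rng = np.random.default_rng(3)
for day in range(1, 101):
    load = rng.uniform(0, 100)           # today's training load (toy data)
    # Predict step: propagate fitness/fatigue through the model
    x = A @ x + B * load
    P = A @ P @ A.T + Q
    if day % 7 == 0:                     # weekly performance test (toy assumption)
        # In practice z comes from an actual test; here it is simulated near the prediction
        z = p0 + H @ x + rng.normal(scale=5.0)
        y = z - (p0 + H @ x)             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        print(f"day {day}: predicted performance = {(p0 + H @ x).item():.1f}")
```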