6,739 result(s) for "Scheduling (computing)"
Virtual QPU: A Novel Implementation of Quantum Computing
The increasing popularity of quantum computing has driven a considerable rise in demand for cloud quantum computing in recent years. This rapid surge in demand has led to a scarcity of cloud-based quantum computing resources. To meet the needs of a growing number of researchers, it is imperative to provide efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into quantum computing to enhance quantum cloud throughput; (2) an asynchronous circuit-execution methodology to improve quantum computing flexibility; and (3) a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. These concepts were validated on a self-built simulated quantum cloud platform.
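The core idea, multiplexing many logical QPUs onto scarce physical devices with asynchronous circuit execution, can be pictured with a small sketch. Everything below (the names, the semaphore-based device pool) is a hypothetical illustration, not the paper's VQPU implementation:

```python
# Minimal sketch (not the authors' implementation): virtual QPUs multiplexed
# onto fewer physical QPUs, with circuits executed asynchronously.
# All names (run_on_vqpu, etc.) are hypothetical.
import asyncio
import random

PHYSICAL_QPUS = 2                       # scarce physical devices
physical = asyncio.Semaphore(PHYSICAL_QPUS)

async def run_on_vqpu(vqpu_id: int, circuit: str) -> str:
    async with physical:                # a vQPU borrows a physical QPU only while running
        await asyncio.sleep(random.uniform(0.1, 0.3))  # stand-in for circuit execution
        return f"vQPU-{vqpu_id}: result of {circuit}"

async def main():
    # Many virtual QPUs accept circuits concurrently; throughput is bounded
    # by the physical pool, not by the number of clients.
    jobs = [run_on_vqpu(i, f"circuit-{i}") for i in range(6)]
    for result in await asyncio.gather(*jobs):
        print(result)

asyncio.run(main())
```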
A DLT-Aware Performance Evaluation Framework for Virtual-Core Speedup Modeling
Scheduling in computing is a well-studied area focused on improving task execution by reducing processing time and increasing system efficiency. Divisible Load Theory (DLT) provides a structured analytical framework for distributing partitionable computation and communication loads across processors, and its adaptability has allowed researchers to integrate it with other models and modern technologies. Building on this foundation, previous studies have shown that Amdahl-like laws can be effectively combined with DLT to produce more realistic performance models. This paper develops analytical models that further extend such integration by incorporating Gustafson’s Law and Juurlink’s Law into DLT to capture broader scaling behaviors. It also extends the analysis to workload distribution in virtual multicore systems, providing a more structured basis for evaluating parallel performance. Methods include analytically computing speedup as a function of the number of cores and the parallelizable fraction under different scheduling strategies, with comparisons across workload conditions. Results show that combining DLT with speedup laws and virtual-core design offers a deeper and more structured approach to analytical parallel system evaluation. While the analysis remains theoretical, the proposed framework establishes a mathematical foundation for future empirical validation, heterogeneous workload modeling, and sensitivity analysis.
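For reference, the two classical scaling laws the paper folds into DLT are easy to state and compute. The sketch below shows only the textbook forms (the paper's DLT-integrated variants and Juurlink's Law are not reproduced here):

```python
# Worked example of the classical speedup laws combined with DLT in the paper.
# Amdahl:    S(n) = 1 / ((1 - p) + p / n)   (fixed problem size)
# Gustafson: S(n) = (1 - p) + p * n         (problem size scales with n)
# p = parallelizable fraction, n = number of cores.
def amdahl(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

def gustafson(p: float, n: int) -> float:
    return (1.0 - p) + p * n

for n in (2, 8, 32, 128):
    print(f"n={n:4d}  Amdahl={amdahl(0.95, n):6.2f}  Gustafson={gustafson(0.95, n):7.2f}")
```

Amdahl's speedup saturates near 1/(1-p) = 20 as cores grow, while Gustafson's keeps rising because the workload scales with the core count; the gap between the two is exactly the kind of scaling behavior the paper's extended models aim to capture.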
HPC Cluster Task Prediction Based on Multimodal Temporal Networks with Hierarchical Attention Mechanism
In recent years, the increasing adoption of High-Performance Computing (HPC) clusters in scientific research and engineering has exposed challenges such as resource imbalance, node idleness, and overload, which hinder scheduling efficiency. Accurate multidimensional task prediction remains a key bottleneck. To address this, we propose a hybrid prediction model that integrates Informer, Long Short-Term Memory (LSTM), and Graph Neural Networks (GNN), enhanced by a hierarchical attention mechanism combining multi-head self-attention and cross-attention. The model captures both long- and short-term temporal dependencies and deep semantic relationships across features. Built on a multitask learning framework, it predicts task execution time, CPU usage, memory, and storage demands with high accuracy. Experiments show prediction accuracies of 89.9%, 87.9%, 86.3%, and 84.3% on these metrics, surpassing baselines such as Transformer-XL. The results demonstrate that our approach effectively models complex HPC workload dynamics, providing robust support for intelligent cluster scheduling, with both theoretical and practical significance.
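As a rough illustration of the hierarchical attention idea (self-attention within a modality, cross-attention across modalities), here is a minimal PyTorch sketch; the dimensions, module layout, and four-way output head are assumptions for illustration, not the paper's architecture:

```python
# Assumed sketch, not the paper's code: multi-head self-attention over one
# modality's time series, then cross-attention fusing it with another
# modality's features, feeding a multitask prediction head.
import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 4)  # task time, CPU, memory, storage (multitask)

    def forward(self, temporal: torch.Tensor, graph_feats: torch.Tensor):
        # temporal: (batch, seq, dim), e.g. Informer/LSTM outputs
        # graph_feats: (batch, nodes, dim), e.g. GNN node embeddings
        h, _ = self.self_attn(temporal, temporal, temporal)      # within-modality
        fused, _ = self.cross_attn(h, graph_feats, graph_feats)  # across modalities
        return self.head(fused.mean(dim=1))                      # pooled predictions

model = HierarchicalAttention()
out = model(torch.randn(2, 16, 64), torch.randn(2, 8, 64))
print(out.shape)  # torch.Size([2, 4])
```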
When Computers Were Human
Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider world of women and men who did the hard computational labor of science. His grandmother's casual remark, "I wish I'd used my calculus," hinted at a career deferred and an education forgotten, a secret life unappreciated; like many highly educated women of her generation, she studied to become a human computer because nothing else would offer her a place in the scientific world. The book begins with the return of Halley's comet in 1758 and the effort of three French astronomers to compute its orbit. It ends four cycles later, with a UNIVAC electronic computer projecting the 1986 orbit. In between, Grier tells us about the surveyors of the French Revolution, describes the calculating machines of Charles Babbage, and guides the reader through the Great Depression to marvel at the giant computing room of the Works Progress Administration. When Computers Were Human is the sad but lyrical story of workers who gladly did the hard labor of research calculation in the hope that they might be part of the scientific community. In the end, they were rewarded by a new electronic machine that took the place and the name of those who were, once, the computers.
ILP-Based and Heuristic Scheduling Techniques for Variable-Cycle Approximate Functional Units in High-Level Synthesis
Approximate computing is a promising approach to the design of area-, power-, and performance-efficient circuits for computation-error-tolerant applications such as image processing and machine learning. Approximate functional units, such as approximate adders and approximate multipliers, have been actively studied for the past decade, and some of these units can dynamically change their degree of computation accuracy: the greater their computational inaccuracy, the faster they are. This study examines the high-level synthesis of approximate circuits that take advantage of such accuracy-controllable functional units. We propose scheduling methods based on integer linear programming (ILP) and list scheduling. Under resource and time constraints, the proposed methods try to minimize the computation error of the output value by selectively multi-cycling operations: operations that have a large impact on the output accuracy are multi-cycled to perform exact computing, whereas operations with a small impact on the accuracy are assigned a single cycle for approximate computing. In the experiments, we explored the trade-off between performance, hardware cost, and accuracy to demonstrate the effectiveness of this work.
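The selective multi-cycling policy can be sketched greedily: spend the available cycle slack on the operations whose approximation would hurt output accuracy most. This toy version (hypothetical costs and error impacts; the paper's ILP formulation is not reproduced) shows the core selection rule:

```python
# Illustrative sketch, not the paper's algorithm: multi-cycle the operations
# with the largest output-error impact until the cycle budget is spent;
# the rest stay single-cycle approximate.
EXTRA_CYCLES = 1  # assumed extra cost of running an op exactly
ops = [("mul1", 0.40), ("add1", 0.05), ("mul2", 0.30), ("add2", 0.02)]

def schedule(ops, cycle_budget: int):
    base = len(ops)                    # one cycle per op if all approximate
    slack = cycle_budget - base        # cycles available for exact execution
    plan = {}
    for name, impact in sorted(ops, key=lambda o: -o[1]):  # highest impact first
        if slack >= EXTRA_CYCLES:
            plan[name] = "exact (multi-cycle)"
            slack -= EXTRA_CYCLES
        else:
            plan[name] = "approximate (1 cycle)"
    return plan

print(schedule(ops, cycle_budget=6))   # 2 cycles of slack -> mul1, mul2 exact
```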
Heterogeneous Computing Power Scheduling Method Based on Distributed Deep Reinforcement Learning in Cloud-Edge-End Environments
With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for low-latency, high-reliability services. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying features of edge server load fluctuations, leading to insufficient adaptability in heterogeneous dynamic environments. This paper therefore establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It models the historical load characteristics of edge servers as time series, making the agent aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in a central trajectory pool and adjusting the sample weight distribution, it realizes directional exploration and policy optimization toward high-value trajectories. Experimental results indicate that the proposed method has distinct advantages over existing methods in average delay and task failure rate, and verify the method’s robustness in dynamic environments.
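The trajectory-pool re-weighting can be pictured with a minimal sketch: store trajectories with an exploration-value score and sample in proportion to it. The names and the scoring values below are assumptions for illustration; the paper's value estimator is not reproduced:

```python
# Assumed sketch of a central trajectory pool that samples high
# exploration-value trajectories more often.
import random

class TrajectoryPool:
    def __init__(self):
        self.trajectories = []   # (trajectory, exploration_value) pairs

    def add(self, trajectory, value: float):
        self.trajectories.append((trajectory, value))

    def sample(self, k: int):
        # Sampling probability proportional to exploration value, so
        # high-value trajectories steer directional exploration.
        total = sum(v for _, v in self.trajectories)
        weights = [v / total for _, v in self.trajectories]
        return random.choices([t for t, _ in self.trajectories], weights, k=k)

pool = TrajectoryPool()
pool.add("low-reward offloading trace", 0.1)
pool.add("novel high-reward trace", 0.9)
print(pool.sample(5))  # mostly the high-value trace
```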
Resource Scheduling Algorithm for Edge Computing Networks Based on Multi-Objective Optimization
Edge computing networks represent an emerging technological paradigm that enhances real-time responsiveness for mobile devices by reallocating computational resources from central servers to the network’s edge. This shift enables more efficient computing services for mobile devices. However, deploying computing services on inappropriate edge nodes can result in imbalanced resource utilization within edge computing networks, ultimately compromising service efficiency. Consequently, effectively leveraging the resources of edge computing devices while minimizing the energy consumption of terminal devices has become a critical issue in resource scheduling for edge computing. To tackle these challenges, this paper proposes a resource scheduling algorithm for edge computing networks based on multi-objective optimization. This approach utilizes the entropy weight method to assess both dynamic and static metrics of edge computing nodes, integrating them into a unified computing power metric for each node. This integration facilitates a better alignment between computing power and service demands. By modeling the resource scheduling problem in edge computing networks as a multi-objective Markov decision process (MOMDP), this study employs multi-objective reinforcement learning (MORL) and the proximal policy optimization (PPO) algorithm to concurrently optimize task transmission latency and energy consumption in dynamic environments. Finally, simulation experiments demonstrate that the proposed algorithm outperforms state-of-the-art scheduling algorithms in terms of latency, energy consumption, and overall reward. Additionally, it achieves an optimal hypervolume and Pareto front, effectively balancing the trade-off between task transmission latency and energy consumption in multi-objective optimization scenarios.
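The entropy weight method named in the abstract is a standard technique, so a compact worked example is possible; the node metrics below are made-up illustrative data:

```python
# Worked example of the entropy weight method used to fuse node metrics
# into a single computing-power score (illustrative data only).
import numpy as np

# rows = edge nodes, cols = metrics (e.g. free CPU, bandwidth, queue headroom)
X = np.array([[0.8, 0.6, 0.2],
              [0.5, 0.9, 0.4],
              [0.3, 0.4, 0.9]])

P = X / X.sum(axis=0)                               # normalize each metric column
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy of each metric
w = (1 - E) / (1 - E).sum()                         # low-entropy metrics weigh more
score = X @ w                                       # unified computing-power metric
print("weights:", w.round(3), "node scores:", score.round(3))
```

Metrics that vary more across nodes have lower entropy and thus higher weight, which is why the fused score discriminates between nodes better than a flat average would.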
Optimal resource management and allocation for autonomous-vehicle-infrastructure cooperation under mobile edge computing
Purpose: With the continuous technological development of automated driving and the expansion of its application scope, the types of on-board equipment keep growing, their computing capabilities keep increasing, and the corresponding applications become more diverse. Because these applications must run on on-board equipment, the demands on its computing capabilities keep rising. Mobile edge computing is one of the effective methods for solving practical application problems in automated driving.
Design/methodology/approach: In accordance with practical requirements, this paper proposes an optimal resource management and allocation method for autonomous-vehicle-infrastructure cooperation in a mobile edge computing environment and conducts an experiment in a practical application.
Findings: The study proposes the design of the road-side unit module and its corresponding real-time operating system task coordination in edge computing, a method for edge computing load integration and heterogeneous computing, and methods for real-time scheduling of highly concurrent computation tasks, adaptive computation task migration, and edge server collaborative resource allocation. Test results indicate that the proposed method can greatly reduce task computing delay, and that power consumption generally increases with task size and complexity.
Originality/value: The results show that the proposed method achieves lower power consumption and lower computational overhead while ensuring quality of service for users, indicating a great application prospect.
Towards Multi-Satellite Collaborative Computing via Task Scheduling Based on Genetic Algorithm
As satellite systems rapidly develop toward multiple satellites, multiple tasks, and high response-speed requirements, existing computing techniques face the following challenges: insufficient computing power, limited computing resources, and weak coordination ability. Meanwhile, most methods face significant limitations in response speed and resource utilization. To solve these problems, we propose a distributed collaborative computing framework with a genetic algorithm-based task scheduling model (DCCF-GA), which realizes collaborative computing among multiple satellites through a genetic algorithm. Specifically, the work comprises two aspects. First, a distributed architecture of satellites is constructed in which the main satellite is responsible for distribution and scheduling while the computing satellites are responsible for completing the tasks. Then, we present a genetic algorithm-based task scheduling model that enables multiple satellites to collaborate in completing the tasks. Experiments show that the proposed algorithm has clear advantages in completion time and outperforms other algorithms in resource efficiency.
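A genetic algorithm for this kind of task-to-satellite assignment follows the usual selection/crossover/mutation loop. The toy sketch below (made-up task costs; not DCCF-GA itself) minimizes makespan across computing satellites:

```python
# Toy sketch, not DCCF-GA: a genetic algorithm assigning tasks to
# computing satellites so the longest per-satellite load (makespan) shrinks.
import random

TASKS = [4, 2, 7, 3, 5, 1, 6]   # hypothetical task costs
SATS = 3                         # computing satellites

def makespan(assign):            # chromosome: assign[i] = satellite for task i
    loads = [0] * SATS
    for cost, sat in zip(TASKS, assign):
        loads[sat] += cost
    return max(loads)

def evolve(pop_size=30, generations=100):
    pop = [[random.randrange(SATS) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                      # selection: keep the fittest
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASKS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:               # mutation
                child[random.randrange(len(TASKS))] = random.randrange(SATS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print("assignment:", best, "makespan:", makespan(best))
```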
Mathematical and Intellectual Innovation in Higher Education Talent Training Models
The economy and society have entered a highly transformative digital age. Digital intelligence drives changes in social and economic forms and promotes the integration and innovation of technology, the economy, and society. In this paper, we formulate goals and a model for training talents with digital-intelligence thinking and propose a strategy for implementing this training, using big data technology (a cloud computing resource scheduling method and virtual scene dynamic splicing technology) to assist in implementing the teaching strategies. We conduct simulation experiments to analyze the performance of the proposed cloud resource scheduling algorithm on different numbers of virtual machines, evaluate the effect of the virtual scene splicing technology on image alignment, and discuss influencing factors from three aspects (cultivation objectives, curriculum arrangement, and scientific research training) in light of satisfaction with the implementation of the talent cultivation model. 40.61% of the students rate the cultivation objectives for digital-intelligence ability as average, and 13.75% feel completely satisfied; only 4.31% of the students expressed satisfaction with the scientific research and innovation training, suggesting that most students have not carried out interdisciplinary research in depth. These results show that the implementation of the strategy for cultivating such talents still needs improvement and strengthening.