Catalogue Search | MBRL
168,603 results for "Processor"
A batch scheduling model for a three-stage flow shop with job and batch processors considering a sampling inspection to minimize expected total actual flow time
by Sukoyo, Sukoyo; Suprayogi, Suprayogi; Halim, Abdul Hakim
in Actual flow time, Batch scheduling, Three-stage flow shop, Job processor, Batch processor, Sampling inspection
2021
Purpose: This research develops a batch scheduling model for a three-stage flow shop with job processors in the first and second stages and a batch processor in the third stage. The model integrates production process activities and a product inspection activity to minimize the expected total actual flow time. Design/methodology/approach: The batch scheduling problem for a three-stage flow shop is formulated as a mathematical model, and a heuristic algorithm is proposed to solve it. The model applies backward scheduling to accommodate the objective of minimizing the expected total actual flow time. Findings: This research proposes a batch scheduling model for a three-stage flow shop with job and batch processors producing multiple items, together with an algorithm to solve the model; the objective is to minimize the expected total actual flow time. The resulting production batches can be sequenced across all product types to minimize idle time, and the batch processor capacity affects the sample size and indirectly affects the production batch size. Originality/value: This research develops a batch scheduling model for a three-stage flow shop comprising job and batch processors and carrying out integrated production and inspection activities to minimize the expected total actual flow time.
Journal Article
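As a hedged illustration of the backward-scheduling idea named in the abstract above, the minimal Python sketch below places production batches on a single processor so that the last batch finishes exactly at the due date; the batch sizes, unit processing time, and due date are hypothetical, and the paper's three-stage shop and sampling inspection are not modeled.

```python
# A minimal sketch of backward scheduling on a single processor,
# a simplification of the paper's three-stage flow shop. The batch
# sizes, unit processing time, and due date are hypothetical.

def backward_schedule(batch_sizes, unit_time, due_date):
    """Place batches back-to-back so the last one finishes at the
    due date; returns (start, finish) per batch, latest batch first."""
    schedule = []
    finish = due_date
    for size in batch_sizes:           # batch sequence already decided
        start = finish - size * unit_time
        schedule.append((start, finish))
        finish = start                 # next batch ends where this one starts
    return schedule

# Example: three batches of a 60-part lot, due at t = 100.
for start, finish in backward_schedule([10, 20, 30], 1.0, 100.0):
    print(f"batch runs from t={start:.0f} to t={finish:.0f}")
```

Scheduling backward from the due date keeps every batch as close to delivery as possible, which is what drives the actual-flow-time objective in the abstract.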
Enhancing Solar Convection Analysis With Multi‐Core Processors and GPUs
by Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Heidari, Arash
in Adaptability, Algorithms, Astronomy
2025
In the realm of astrophysical numerical calculations, the demand for enhanced computing power is imperative. The time-consuming nature of the calculations, particularly in the domain of solar convection, poses a significant challenge for astrophysicists seeking to analyze new data efficiently. Because they allow different portions of the data to be processed independently, parallel algorithms are an effective way to accelerate such workloads. This study examines how multi-core processors and GPUs can be used to parallelize solar convection calculations, with the primary goal of reducing the time needed to process the data so that new results can be analyzed more quickly. Parallel execution performs well, and GPUs in particular provide large speed-ups for 3D tasks, which demonstrates the importance of tailoring the parallelization method to the problem size; for 2D calculations, multi-core processors work better. The results not only improve models of solar convection but also show that the achievable speed-up varies somewhat with the hardware and the processing approach.
Optimizing Solar Calculations: The Role of Parallel Algorithms and GPUs in Astrophysical Research.
Journal Article
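As a rough illustration of the multi-core route the abstract above recommends for 2D workloads, the sketch below splits a grid across worker processes; the smoothing kernel, grid size, and worker count are hypothetical stand-ins for the paper's solar convection calculations.

```python
# A minimal sketch of multi-core data parallelism for a 2D grid:
# split the grid into row blocks and process them on separate cores.
# The toy kernel and grid size are hypothetical.
import numpy as np
from multiprocessing import Pool

def relax_block(block):
    """Toy stencil: average each interior cell with its row neighbours."""
    out = block.copy()
    out[:, 1:-1] = (block[:, :-2] + block[:, 1:-1] + block[:, 2:]) / 3.0
    return out

if __name__ == "__main__":
    grid = np.random.rand(4096, 4096)          # stand-in for convection data
    blocks = np.array_split(grid, 8, axis=0)   # one block per worker
    with Pool(processes=8) as pool:
        grid = np.vstack(pool.map(relax_block, blocks))
    print(grid.shape)
```

The same decomposition idea carries over to GPUs for 3D grids, where, per the abstract, the larger volume of independent work pays off more.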
Dung beetle optimizer: a new meta-heuristic algorithm for global optimization
2023
In this paper, a novel population-based technique called the dung beetle optimizer (DBO) algorithm is presented, inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles. The proposed DBO algorithm takes into account both global exploration and local exploitation, thereby offering a fast convergence rate and satisfactory solution accuracy. A series of well-known mathematical test functions (including 23 benchmark functions and 29 CEC-BC-2017 test functions) is employed to evaluate the search capability of the DBO algorithm. The simulation results show that the DBO algorithm is substantially competitive with state-of-the-art optimization approaches in terms of convergence rate, solution accuracy, and stability. In addition, the Wilcoxon signed-rank test and the Friedman test are used to evaluate the experimental results, confirming the superiority of the DBO algorithm over other currently popular optimization techniques. To further illustrate its practical application potential, the DBO algorithm is successfully applied to three engineering design problems. The experimental results demonstrate that the proposed DBO algorithm can effectively deal with real-world application problems.
Journal Article
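The abstract above describes a population-based metaheuristic balancing exploration and exploitation. The sketch below shows the generic loop such algorithms share; the random-walk update is a placeholder, not the paper's dung-beetle rolling, dancing, or foraging rules.

```python
# A minimal sketch of a population-based search loop of the kind
# DBO belongs to. The update rule is a generic placeholder, not the
# paper's behavioral operators.
import numpy as np

def minimize(f, dim, bounds, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))     # initial population
    fitness = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        best = X[fitness.argmin()]
        step = 1.0 - t / iters                   # shrink: explore -> exploit
        cand = (X
                + step * 0.1 * (hi - lo) * rng.normal(size=X.shape)
                + 0.5 * rng.random((pop, 1)) * (best - X))
        cand = np.clip(cand, lo, hi)
        cf = np.apply_along_axis(f, 1, cand)
        improved = cf < fitness                  # greedy replacement
        X[improved], fitness[improved] = cand[improved], cf[improved]
    return X[fitness.argmin()], fitness.min()

# Example on the sphere function, a standard benchmark.
x, fx = minimize(lambda v: np.sum(v**2), dim=5, bounds=(-10, 10))
print(fx)
```

The shrinking step size is one simple way to trade early global exploration for late local exploitation, the balance the abstract credits for DBO's convergence behavior.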
A Methodology for the Synthesis of E-Commerce
2019
Recent advances in real-time communication and relational modalities offer a viable alternative to the producer-consumer problem. After years of theoretical research into the partition table, we show the refinement of the location-identity split, which embodies the robust principles of operating systems. In order to fix this issue, we propose new scalable archetypes (Gunnel), demonstrating that multi-processors and multi-processors are never incompatible.
Journal Article
Topological analog signal processing
by Zangeneh-Nejad, Farzad; Fleury, Romain
in 639/301/119/2792, 639/766/25/3927, Differential equations
2019
Analog signal processors have attracted a tremendous amount of attention recently, as they potentially offer much faster operation and lower power consumption than their digital versions. Yet, they are not preferable for large scale applications due to the considerable observational errors caused by their excessive sensitivity to environmental and structural variations. Here, we demonstrate both theoretically and experimentally the unique relevance of topological insulators for alleviating the unreliability of analog signal processors. In particular, we achieve an important signal processing task, namely resolution of linear differential equations, in an analog system that is protected by topology against large levels of disorder and geometrical perturbations. We believe that our strategy opens up large perspectives for a new generation of robust all-optical analog signal processors, which can now not only perform ultrafast, high-throughput, and power efficient signal processing tasks, but also compete with their digital counterparts in terms of reliability and flexibility.
Analog signal processors could potentially offer faster operation and lower power consumption than digital versions, but are not yet commonly used for large scale applications due to considerable observational errors. Here, the authors demonstrate the unique relevance of topological insulators for improving reliability in such analog processors.
Journal Article
Intensive computing on a large data volume with a short-vector single instruction multiple data processor
by Ungurean, Ioan; Gaitan, Vasile-Gheorghita; Gaitan, Nicoleta-Cristina
in Advanced manufacturing technologies, Algorithms, Applied sciences
2014
In this study, the authors evaluate the performance of the PowerXCell 8i processor, which is based on the Cell Broadband Engine architecture. For this purpose, they chose an algorithm for the k-nearest-neighbour problem and optimised it to exploit the facilities provided by this architecture. The PowerXCell 8i performance was evaluated by executing the algorithm with single- and double-precision calculations, in each case with and without SIMDisation. For single-precision calculations, the authors achieved a maximum speed-up of 43.85 with SIMDisation by activating 6 synergistic processor element (SPE) cores, and 39.73 without SIMDisation by activating 16 SPE cores. For double-precision calculations, they achieved a maximum speed-up of 34.79 with SIMDisation by activating 9 SPE cores, and 32.71 without SIMDisation by activating 12 SPE cores. These values are relative to execution on the PowerPC processor element (PPE) and result from the way the SPE cores access main memory, through DMA transfers performed in parallel with the computing operations. The authors conclude that this processor can be used efficiently to execute algorithms that require intensive computation on large data volumes.
Journal Article
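To illustrate the kind of data parallelism the study exploits, the sketch below writes the k-nearest-neighbour distance kernel once as a scalar loop and once in vectorized (SIMD-style) form, with NumPy standing in for the SPE short-vector units; the data shapes are hypothetical.

```python
# A minimal sketch of the k-NN distance kernel, scalar vs. vectorized.
# NumPy's whole-array operations stand in for short-vector SIMD lanes;
# the point set and query are hypothetical.
import numpy as np

def knn_scalar(query, points, k):
    """One element at a time: the non-SIMDised baseline."""
    dists = [sum((q - p) ** 2 for q, p in zip(query, pt)) for pt in points]
    return np.argsort(dists)[:k]

def knn_vectorized(query, points, k):
    """One subtraction/square/sum over whole rows: the data-parallel
    form a SIMD unit can execute across lanes simultaneously."""
    dists = np.sum((points - query) ** 2, axis=1)
    return np.argpartition(dists, k)[:k]

pts = np.random.rand(100_000, 3).astype(np.float32)  # single precision,
q = np.random.rand(3).astype(np.float32)             # as in the paper
print(knn_vectorized(q, pts, k=5))
```

The kernel is embarrassingly data-parallel, which is why, per the abstract, overlapping DMA transfers with computation on the SPE cores yields large speed-ups.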
A Programmable Crypto-Processor for National Institute of Standards and Technology Post-Quantum Cryptography Standardization Based on the RISC-V Architecture
2023
The advancement of quantum computing threatens the security of conventional public-key cryptosystems. Post-quantum cryptography (PQC) was introduced to ensure data confidentiality in communication channels, and various algorithms are being developed. The National Institute of Standards and Technology (NIST) has initiated PQC standardization, and the algorithms selected for standardization, along with the round 4 candidates, were announced in 2022. Due to their large memory footprints and highly repetitive operations, there have been numerous attempts to accelerate PQC algorithms in both hardware and software. This paper introduces a RISC-V instruction set extension for the NIST PQC standard algorithms and round 4 candidates. The proposed programmable crypto-processor can support a wide range of PQC algorithms with the extended RISC-V instruction set and demonstrates significant reductions in code size, the number of executed instructions, and execution cycle counts of target operations in PQC algorithms of up to 79%, 92%, and 87%, respectively, compared to RV64IM with optimization level 3 (-O3) in the GNU toolchain.
Journal Article
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
2017
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
Journal Article
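The error-mitigation idea in the abstract above, boosting errors artificially and then extrapolating to the zero-error case, can be sketched as follows; the noise model and the "true" value are hypothetical stand-ins for a quantum coprocessor.

```python
# A minimal sketch of zero-noise extrapolation: measure an expectation
# value at artificially boosted noise levels, then extrapolate back to
# zero noise. The exponential decay and true value are hypothetical.
import numpy as np

def noisy_expectation(scale, true_value=0.75, rate=0.2, shots_sigma=0.005):
    """Expectation decays as noise is boosted; plus simulated shot noise."""
    rng = np.random.default_rng(int(scale * 1000))
    return true_value * np.exp(-rate * scale) + rng.normal(0, shots_sigma)

scales = np.array([1.0, 1.5, 2.0, 3.0])        # noise boost factors
values = np.array([noisy_expectation(s) for s in scales])

# Linear (Richardson-style) fit in the noise scale, evaluated at 0.
slope, intercept = np.polyfit(scales, values, 1)
print(f"raw value at scale 1:     {values[0]:.4f}")
print(f"zero-noise extrapolation: {intercept:.4f}  (true: 0.7500)")
```

The extrapolated estimate lands closer to the true value than the raw scale-1 measurement, which is the whole point of boosting the noise rather than merely tolerating it.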
Photonic processor benchmarking for variational quantum process tomography
2025
We present a quantum-analogous experimental demonstration of variational quantum process tomography using an optical processor. This approach leverages classical one-hot encoding and unitary decomposition to perform the variational quantum algorithm on a photonic platform. We create the first benchmark for variational quantum process tomography, evaluating the performance of the quantum-analogous experiment on the optical processor against several publicly accessible quantum computing platforms, including IBM’s 127-qubit Sherbrooke processor, QuTech’s five-qubit Tuna-5 processor, and Quandela’s 12-mode Ascella quantum optical processor. We evaluate each method using process fidelity, cost function convergence, and processing time per iteration for variational quantum circuit depths of d = 3 and d = 6. Our results indicate that the optical processors outperform their superconducting counterparts in terms of fidelity and convergence behavior, reaching fidelities of 0.8 after nine iterations, particularly at higher depths, where decoherence and dephasing noise significantly affect the superconducting processors. We further investigate the influence of additional quantum optical effects in our platform relative to the classical one-hot encoding. The process fidelity results show that (classical) thermal noise in the phase-shifters dominates over other optical imperfections, such as mode mismatch and dark counts from single-photon sources. The benchmarking framework and experimental results demonstrate that photonic processors are strong contenders for near-term quantum algorithm deployment, particularly in hybrid variational contexts. This analysis is valuable not only for state and process tomography but also for a wide range of applications involving algorithms based on variational quantum circuits.
Journal Article
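As a hedged illustration of the process-fidelity figure of merit named in the abstract above, the sketch below computes the Uhlmann fidelity between two hypothetical Choi matrices; the numbers are illustrative, not data from the benchmarked processors.

```python
# A minimal sketch of the fidelity figure of merit, applied to two
# hypothetical Choi matrices (ideal identity channel vs. depolarized).
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)          # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

# Choi state of the ideal single-qubit identity channel ...
bell = np.zeros((4, 4), dtype=complex)
bell[np.ix_([0, 3], [0, 3])] = 0.5
# ... versus a depolarized version of it.
p = 0.1
noisy = (1 - p) * bell + p * np.eye(4) / 4
print(f"process fidelity: {fidelity(bell, noisy):.3f}")   # ~0.925
```

Tracking this quantity per iteration, alongside cost convergence and wall-clock time, is the comparison the benchmark runs across the photonic and superconducting platforms.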