Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
7,992 result(s) for "time complexity"
Cognitive science and artificial intelligence: simulating the human mind and its complexity
2019
This study encompasses the interdisciplinary study of cognitive science in the field of artificial intelligence. Past as well as current areas of research have been highlighted so that a better understanding of the topic can be ensured. Furthermore, some of the present-day applications of cognitive science in artificial intelligence have been discussed, as these can be considered the foundation for further improvement. Prior to the discussion of future scopes, real-time complexities have been revealed.
Journal Article
The time complexity of self-assembly
by Gartner, Florian M., Frey, Erwin, Graf, Isabella R.
in Algorithms, Biological activity, Biophysics and Computational Biology
2022
Time efficiency of self-assembly is crucial for many biological processes. Moreover, with the advances of nanotechnology, time efficiency in artificial self-assembly becomes ever more important. While structural determinants and the final assembly yield are increasingly well understood, kinetic aspects concerning time efficiency remain much more elusive. In computer science, the concept of time complexity is used to characterize the efficiency of an algorithm and describes how the algorithm’s run-time depends on the size of the input data. Here we characterize the time complexity of nonequilibrium self-assembly processes by exploring how the time required to realize a certain, substantial yield of a given target structure scales with its size. We identify distinct classes of assembly scenarios, i.e., “algorithms” to accomplish this task, and show that they exhibit drastically different degrees of complexity. Our analysis enables us to identify optimal control strategies for nonequilibrium self-assembly processes. Furthermore, we suggest an efficient irreversible scheme for the artificial self-assembly of nanostructures, which complements the state-of-the-art approach using reversible binding reactions and requires no fine-tuning of binding energies.
Journal Article
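The abstract above borrows the computer-science notion of time complexity: how run time scales with the size of the input (here, the size of the target structure). The following Python sketch is not the authors' self-assembly model; it only illustrates, under generic assumptions, how a scaling exponent can be estimated empirically by timing a workload over increasing input sizes and fitting a power law on a log-log scale.

```python
# A minimal sketch (not the paper's model): estimating how run time scales
# with input size by fitting a power law T(n) ~ c * n^alpha on a log-log scale.
import time
import numpy as np

def measure_runtime(func, sizes, repeats=3):
    """Return the median wall-clock time of func(n) for each n in sizes."""
    times = []
    for n in sizes:
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            func(n)
            samples.append(time.perf_counter() - start)
        times.append(np.median(samples))
    return np.array(times)

def scaling_exponent(sizes, times):
    """Least-squares slope of log(time) vs. log(size), i.e. alpha in T ~ n^alpha."""
    slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

if __name__ == "__main__":
    sizes = [2_000, 4_000, 8_000, 16_000, 32_000]
    # Example workload: sorting a random array, expected to scale roughly as n log n.
    runtimes = measure_runtime(lambda n: np.sort(np.random.rand(n)), sizes)
    print(f"estimated scaling exponent: {scaling_exponent(sizes, runtimes):.2f}")
```

For an n log n workload such as sorting, the fitted exponent typically comes out slightly above 1.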
Low-time complexity and low-cost binary particle swarm optimization algorithm for task scheduling and load balancing in cloud computing
by Kong, Lingfu, Chen, Zhen, Jean Pepe Buanga Mapetu
in Algorithms, Cloud computing, Completion time
2019
With the increasing number of cloud users, the number of tasks is growing exponentially. Scheduling and balancing these tasks amongst different heterogeneous virtual machines (VMs) under constraints such as low makespan, high resource utilization rate, low execution cost and low scheduling time becomes an NP-hard optimization problem. Due to the inefficiency of heuristic algorithms, many meta-heuristic algorithms, such as particle swarm optimization (PSO), have been introduced to solve this problem. However, these algorithms do not guarantee that the optimal solution can be found unless they are combined with other heuristic or meta-heuristic algorithms. Further, these algorithms have high time complexity, making them less useful in realistic scenarios. To solve this NP-hard problem effectively, we propose an efficient binary version of the PSO algorithm with low time complexity and low cost for scheduling and balancing tasks in cloud computing. Specifically, we define an objective function which calculates the maximum completion-time difference among heterogeneous VMs, subject to the updating and optimization constraints introduced in this paper. Then, we devise a particle position update with respect to a load-balancing strategy. The experimental results show that the proposed algorithm achieves task scheduling and load balancing better than existing meta-heuristic and heuristic algorithms.
Journal Article
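As a rough illustration of the objective described in the abstract, the sketch below computes the maximum completion-time difference among heterogeneous VMs for a binary task-to-VM assignment. The names, the speed model and the data are hypothetical; the authors' exact formulation, update rules and constraints are in the paper.

```python
# A hedged sketch of the kind of load-balancing objective described in the
# abstract: the maximum completion-time difference among heterogeneous VMs
# for a task-to-VM assignment. Names and details are illustrative only.
import numpy as np

def completion_times(assignment, task_lengths, vm_speeds):
    """assignment[i] = index of the VM running task i.
    Completion time of a VM = sum of its task lengths / its speed."""
    loads = np.zeros(len(vm_speeds))
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task]
    return loads / vm_speeds

def imbalance(assignment, task_lengths, vm_speeds):
    """Objective to minimize: max - min completion time across VMs."""
    ct = completion_times(assignment, task_lengths, vm_speeds)
    return ct.max() - ct.min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    task_lengths = rng.integers(100, 1000, size=20)   # e.g. millions of instructions
    vm_speeds = np.array([1.0, 1.5, 2.0])             # relative processing speeds
    assignment = rng.integers(0, len(vm_speeds), size=20)
    print("imbalance:", imbalance(assignment, task_lengths, vm_speeds))
```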
Time expenditure in computer aided time studies implemented for highly mechanized forest equipment
by Mușat, Elena Camelia, Apăfăian, Andrei Ioan, Ignea, Gheorghe
in avix, purpose-designed industrial time studying software, dependence, analyzing and processing time, complexity, computer-aided time studies
2016
Time studies are important tools used in forest operations research to produce empirical models or to comparatively assess the performance of two or more operational alternatives, with the general aim of predicting operational behavior, choosing the most adequate equipment or eliminating useless time. There is a long tradition of collecting the needed data in a traditional fashion, but this approach has its limitations, and it is likely that the use of professional software will be extended to such work in the future, as tools of this kind have already been implemented. However, little to no information is available concerning the performance of data-analysis tasks when using purpose-built professional time-studying software in such research, while the resources needed to conduct time studies, including time, may be quite intensive. Our study aimed to model the relations between the variation of the time needed to analyze video-recorded time study data and the variation of some measured independent variables for a complex organization of a work cycle. The results of our study indicate that the number of work elements separated within a work cycle, as well as the delay-free cycle time and the software functionalities used during data analysis, significantly affected the time expenditure needed to analyze the data (α=0.01, p<0.01). Under the conditions of this study, where the average duration of a work cycle was about 48 seconds and the number of separated work elements was about 14, the speed used to replay the video files significantly affected the mean time expenditure, which averaged about 273 seconds for half of the real speed and about 192 seconds for an analyzing speed that equaled the real speed. We argue that different study designs, as well as the parameters used within the software, are likely to produce different results, a fact that should trigger other studies based on variations of these parameters. However, the results of this study give an initial overview of the time resources needed to process and analyze the data, and may help researchers allocate their resources.
Journal Article
Efficient computation of expected hypervolume improvement using box decomposition algorithms
2019
In the field of multi-objective optimization algorithms, multi-objective Bayesian Global Optimization (MOBGO) is an important branch, in addition to evolutionary multi-objective optimization algorithms. MOBGO utilizes Gaussian Process models learned from previous objective function evaluations to decide the next evaluation site by maximizing or minimizing an infill criterion. A commonly used criterion in MOBGO is the Expected Hypervolume Improvement (EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, so far, it has been a challenge to calculate exact EHVI values efficiently. This paper proposes an efficient algorithm for the exact calculation of the EHVI in a generic case. This efficient algorithm is based on partitioning the integration volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities can be improved from the previous O(n^2) and O(n^3), for two- and three-objective problems respectively, to Θ(n log n), which is asymptotically optimal. This article generalizes the scheme to higher-dimensional cases by utilizing a new hyperbox decomposition technique proposed by Dächert et al. (Eur J Oper Res 260(3):841–855, 2017). It also utilizes a generalization of the multilayered integration scheme that scales linearly in the number of hyperboxes of the decomposition. The speed comparison shows that the proposed algorithm significantly reduces computation time. Finally, this decomposition technique is applied to the calculation of the Probability of Improvement (PoI).
Journal Article
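The exact EHVI algorithm of the paper is considerably more involved; as a minimal, related illustration of the slice-based idea, the sketch below computes the plain 2-D hypervolume indicator (for minimization) in O(n log n) by sorting the front and summing axis-parallel slices. It is not the authors' EHVI procedure.

```python
# Illustrative only: computes the deterministic 2-D hypervolume indicator by
# sorting the points and summing axis-parallel slices, the flavor of
# decomposition the abstract refers to. Not the paper's EHVI algorithm.
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization) w.r.t. ref.

    points: iterable of (f1, f2) objective vectors, all dominating ref.
    ref:    reference point (r1, r2).
    """
    pts = sorted(points)                 # sort by the first objective: O(n log n)
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                 # point is non-dominated so far
            hv += (ref[0] - f1) * (best_f2 - f2)   # one axis-parallel slice
            best_f2 = f2
    return hv

if __name__ == "__main__":
    front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # expected: 12.0
```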
Time and space complexity of deterministic and nondeterministic decision trees
2023
In this paper, we study arbitrary infinite binary information systems, each of which consists of an infinite set called the universe and an infinite set of two-valued functions (attributes) defined on the universe. We consider the notion of a problem over an information system, which is described by a finite number of attributes and a mapping associating a decision with each tuple of attribute values. As algorithms for problem solving, we use deterministic and nondeterministic decision trees. As measures of time and space complexity, we study the depth and the number of nodes in the decision trees. In the worst case, with the growth of the number of attributes in the problem description, (i) the minimum depth of deterministic decision trees grows either almost as a logarithm or linearly, (ii) the minimum depth of nondeterministic decision trees either is bounded from above by a constant or grows linearly, (iii) the minimum number of nodes in deterministic decision trees has either polynomial or exponential growth, and (iv) the minimum number of nodes in nondeterministic decision trees has either polynomial or exponential growth. Based on these results, we divide the set of all infinite binary information systems into five complexity classes and study, for each class, issues related to the time-space trade-off for decision trees.
Journal Article
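As a toy illustration of the two measures discussed above, the sketch below computes the depth (worst-case number of attribute queries, i.e. time) and the number of nodes (space) of a small hand-built deterministic decision tree for the conjunction x1 AND x2 AND x3. The tree and its representation are hypothetical and not taken from the paper.

```python
# A toy illustration of the two complexity measures in the abstract:
# tree depth (time) and number of nodes (space).
# A tree is either a leaf decision (0 or 1) or a tuple
# (attribute_index, subtree_if_0, subtree_if_1).

tree = (0,              # query x1
        0,              # x1 = 0 -> decision 0
        (1,             # x1 = 1 -> query x2
         0,
         (2, 0, 1)))    # x2 = 1 -> query x3

def depth(t):
    """Depth = worst-case number of attribute queries (time complexity)."""
    if not isinstance(t, tuple):
        return 0
    return 1 + max(depth(t[1]), depth(t[2]))

def num_nodes(t):
    """Total number of nodes, leaves included (space complexity)."""
    if not isinstance(t, tuple):
        return 1
    return 1 + num_nodes(t[1]) + num_nodes(t[2])

print(depth(tree), num_nodes(tree))   # 3 queries deep, 7 nodes
```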
Cluster-based Kriging approximation algorithms for complexity reduction
2020
Kriging, or Gaussian Process Regression, is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck as more and more data become available. In this paper, we propose a general methodology for complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
Journal Article
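A minimal sketch of the general divide-and-conquer idea behind cluster Kriging, assuming a k-means partition and one scikit-learn Gaussian Process per cluster; the paper proposes four specific approximation algorithms, none of which is reproduced here.

```python
# Sketch of the basic cluster-Kriging idea: partition the data with k-means
# and fit one Gaussian Process per cluster, so each model only pays the cubic
# cost on its own (smaller) cluster. Not any of the paper's four algorithms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

class ClusterGP:
    def __init__(self, n_clusters=4):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.models = []

    def fit(self, X, y):
        labels = self.kmeans.fit_predict(X)
        self.models = [
            GaussianProcessRegressor(normalize_y=True).fit(X[labels == c], y[labels == c])
            for c in range(self.kmeans.n_clusters)
        ]
        return self

    def predict(self, X):
        # Route each query point to the model of its nearest cluster centre.
        labels = self.kmeans.predict(X)
        y = np.empty(len(X))
        for c, model in enumerate(self.models):
            mask = labels == c
            if mask.any():
                y[mask] = model.predict(X[mask])
        return y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-5, 5, size=(2000, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
    print(ClusterGP(n_clusters=4).fit(X, y).predict(X[:5]))
```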
A Comparative Analysis of Multi-Criteria Decision-Making Methods for Resource Selection in Mobile Crowd Computing
by Pramanik, Pijush Kanti Dutta, Marinković, Dragan, Choudhury, Prasenjit
in Algorithms, Cloud computing, Comparative analysis
2021
In mobile crowd computing (MCC), smart mobile devices (SMDs) are utilized as computing resources. To achieve satisfactory performance and quality of service, selecting the most suitable resources (SMDs) is crucial. The selection is generally made based on the computing capability of an SMD, which is defined by its various fixed and variable resource parameters. As the selection is made on different criteria of varying significance, the resource selection problem can be duly represented as an MCDM problem. However, for the real-time implementation of MCC, and considering its dynamicity, the resource selection algorithm should be time-efficient. In this paper, we aim to identify a suitable MCDM method for resource selection in such a dynamic and time-constrained environment. For this, we present a comparative analysis of various MCDM methods under asymmetric conditions with varying selection criteria and alternative sets. Various datasets of different sizes are used for evaluation. We execute each program on a Windows-based laptop and also on an Android-based smartphone to assess average runtimes. Besides the time complexity analysis, we perform sensitivity analysis and ranking order comparison to check the correctness, stability, and reliability of the rankings generated by each method.
Journal Article
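The paper compares several MCDM methods; as a small illustration of what such a method does, the sketch below ranks candidate SMDs with the Simple Additive Weighting (weighted-sum) model on a toy decision matrix. The criteria, weights and data are hypothetical.

```python
# One simple representative MCDM method, Simple Additive Weighting (SAW),
# applied to a toy decision matrix of candidate SMDs. Hypothetical data.
import numpy as np

def saw_rank(matrix, weights, benefit):
    """Rank alternatives with the Simple Additive Weighting method.

    matrix:  (alternatives x criteria) decision matrix.
    weights: criteria weights, summing to 1.
    benefit: True for criteria to maximize, False for those to minimize.
    """
    norm = np.empty_like(matrix, dtype=float)
    for j in range(matrix.shape[1]):
        col = matrix[:, j].astype(float)
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights)
    return scores, np.argsort(-scores)               # best alternative first

if __name__ == "__main__":
    # Rows: candidate SMDs; columns: CPU (GHz), RAM (GB), battery (%), load (%).
    devices = np.array([[2.4, 6, 80, 35],
                        [2.0, 8, 60, 20],
                        [2.8, 4, 90, 50]])
    weights = [0.3, 0.3, 0.2, 0.2]
    benefit = [True, True, True, False]              # load is a cost criterion
    scores, ranking = saw_rank(devices, weights, benefit)
    print(scores.round(3), ranking)
```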
Efficient Implementations of Echo State Network Cross-Validation
2023
Cross-Validation (CV) is still uncommon in time series modeling. Echo State Networks (ESNs), as a prime example of Reservoir Computing (RC) models, are known for their fast and precise one-shot learning, which often benefits from good hyper-parameter tuning. This makes them ideal for changing the status quo. We discuss CV of time series for predicting a concrete time interval of interest, suggest several schemes for cross-validating ESNs and introduce an efficient algorithm for implementing them. This algorithm is presented as two levels of optimization of doing k-fold CV. Training an RC model typically consists of two stages: (i) running the reservoir with the data and (ii) computing the optimal readouts. The first level of our optimization addresses the most computationally expensive part (i) and makes it remain constant irrespective of k. It dramatically reduces reservoir computations in any type of RC system and is enough if k is small. The second level of optimization also makes the (ii) part remain constant irrespective of large k, as long as the dimension of the output is low. We discuss when the proposed validation schemes for ESNs could be beneficial, present three options for producing the final model, and empirically investigate them on six different real-world datasets, as well as perform empirical computation time experiments. We provide the code in an online repository. The proposed CV schemes give better and more stable test performance on all six real-world datasets, covering three task types. Empirical run times confirm our complexity analysis. In most situations, k-fold CV of ESNs and many other RC models can be done for virtually the same time and space complexity as a simple single-split validation. This enables CV to become a standard practice in RC.
Journal Article
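A hedged sketch of the principle behind the first-level optimization described above: run the reservoir over the series once (stage (i)) and reuse the precomputed states to fit a separate ridge readout per fold (stage (ii)). It is only an illustration under simplified assumptions, not the authors' algorithm or code, and it does not include the second-level optimization that also makes stage (ii) independent of k.

```python
# Sketch: reservoir states are computed once and reused across all k folds;
# only the cheap linear readout is refit per fold. Illustrative assumptions only.
import numpy as np

def reservoir_states(u, n_res=200, rho=0.9, seed=0):
    """Stage (i): drive a random echo state reservoir with input series u."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))        # rescale spectral radius
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W_in[:, 0] * ut + W @ x)
        states[t] = x
    return states

def kfold_readouts(states, y, k=5, ridge=1e-6):
    """Stage (ii): one ridge-regression readout per fold, reusing the same states."""
    folds = np.array_split(np.arange(len(y)), k)
    errors = []
    for val_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(y)), val_idx)
        X_tr, y_tr = states[train_idx], y[train_idx]
        W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(X_tr.shape[1]),
                                X_tr.T @ y_tr)
        errors.append(np.mean((states[val_idx] @ W_out - y[val_idx]) ** 2))
    return np.mean(errors)

if __name__ == "__main__":
    t = np.arange(2000)
    u = np.sin(0.05 * t)
    y = np.roll(u, -1)                                # predict the next value
    states = reservoir_states(u)                      # computed once for all folds
    print("mean k-fold MSE:", kfold_readouts(states, y))
```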