Search Results

99 result(s) for "NP-complete problem"
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Networks of neurons in the brain apply an event-based processing strategy, unlike processors in our current generation of computer hardware: short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems at the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple, stereotypical motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, on the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
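The core idea of the abstract above can be illustrated with a plain classical analogue (not the paper's spiking implementation): define an energy function that counts violated constraints, then resample variables with Boltzmann-weighted noise so that lower-energy states are exponentially preferred. A minimal sketch, with a toy graph-colouring instance as the CSP:

```python
import math
import random

def stochastic_csp_search(n_vars, domain, constraints, steps=10000, beta=2.0, seed=0):
    """Noise-driven local search for a CSP: repeatedly pick a variable and
    resample its value, favouring values that violate fewer constraints
    (a classical stand-in for the energy-based stochastic dynamics)."""
    rng = random.Random(seed)
    assign = [rng.choice(domain) for _ in range(n_vars)]

    def energy(a):
        # Number of violated constraints; 0 means a valid assignment.
        return sum(1 for check in constraints if not check(a))

    for _ in range(steps):
        if energy(assign) == 0:
            return assign
        v = rng.randrange(n_vars)
        # Boltzmann-weighted resampling: lower energy is exponentially more likely.
        weights = [math.exp(-beta * energy(assign[:v] + [val] + assign[v + 1:]))
                   for val in domain]
        r = rng.random() * sum(weights)
        for val, w in zip(domain, weights):
            r -= w
            if r <= 0:
                assign[v] = val
                break
    return assign if energy(assign) == 0 else None

# Toy instance: 3-colour a 4-cycle (adjacent vertices must get different colours).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
constraints = [(lambda a, u=u, v=v: a[u] != a[v]) for u, v in edges]
solution = stochastic_csp_search(4, [0, 1, 2], constraints)
```

Note how noise is the search mechanism here, as in the paper: the sampler never gets stuck deterministically, and valid assignments are absorbing states.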
Interference-free walks in time: temporally disjoint paths
We investigate the computational complexity of finding temporally disjoint paths and walks in temporal graphs, where the edge set changes over discrete time steps. Temporal paths and walks use edges that appear at monotonically increasing time steps. Two paths (or walks) are temporally disjoint if they never visit the same vertex at the same time; otherwise, they interfere. This reflects applications in robotics, traffic routing, or finding safe pathways in dynamically changing networks. At one extreme, we show that on general graphs the problem is computationally hard: the path version is NP-hard even if we want to find only two temporally disjoint paths, and the walk version is W[1]-hard (Klobas et al., IJCAI 2021, pp. 4090–4096) when parameterized by the number of walks. However, it is polynomial-time solvable for any constant number of walks. At the other extreme, restricting the input temporal graph to have a path as its underlying graph, quite counter-intuitively, we find NP-hardness in general but also identify natural tractable cases.
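The disjointness condition itself is simple to state in code. A minimal sketch (my representation, not the paper's formalism): model a temporal walk as a sequence of (vertex, time) visits; two walks interfere exactly when those visit sets intersect.

```python
def temporally_disjoint(walk_a, walk_b):
    """Two temporal walks, given as sequences of (vertex, time) visits,
    are temporally disjoint iff they never occupy the same vertex at the
    same time step; otherwise they interfere."""
    return not (set(walk_a) & set(walk_b))

# Two walks shifted by one time step along the path 0-1-2-3: disjoint.
w1 = [(0, 1), (1, 2), (2, 3)]
w2 = [(1, 1), (2, 2), (3, 3)]
# A walk that sits on vertex 2 at time 3 interferes with w1.
w3 = [(2, 3), (3, 4)]
```

Checking a given set of walks is thus easy; the hardness results concern *finding* such walks in a temporal graph.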
As good as it gets: a scaling comparison of DNA computing, network biocomputing, and electronic computing approaches to an NP-complete problem
All known algorithms for nondeterministic polynomial (NP) complete problems, which are relevant to many real-life applications, require the exploration of a space of potential solutions that grows exponentially with the size of the problem. Since electronic computers can implement only limited parallelism, their use for solving NP-complete problems is impractical for very large instances, and consequently alternative massively parallel computing approaches have been proposed to address this challenge. We present a scaling analysis of two such alternative computing approaches, DNA computing (DNA-C) and network biocomputing with agents (NB-C), compared with electronic computing (E-C). The Subset Sum Problem (SSP), a known NP-complete problem, was used as a computational benchmark to compare the volume, the computing time, and the energy required for each type of computation, relative to the input size. Our analysis shows that the sequentiality of E-C translates into a very small volume compared to that required by DNA-C and NB-C, at the cost of the E-C computing time being outperformed first by DNA-C (linear run time), followed by NB-C. Finally, NB-C appears to be more energy-efficient than DNA-C for some types of input sets, while being less energy-efficient for others, with E-C always being an order of magnitude less energy-efficient than DNA-C. This scaling study suggests that presently none of these computing approaches wins, even theoretically, on all three key performance criteria, and that all require breakthroughs to overcome their limitations, with potential solutions including hybrid computing approaches.
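To make the benchmark concrete: the Subset Sum Problem asks whether some subset of given integers sums to a target, and the obvious exact algorithm enumerates all 2^n subsets. A minimal sketch of that exhaustive search (the exponential exploration that DNA-C and NB-C parallelise, and that a serial E-C machine performs one candidate at a time):

```python
from itertools import combinations

def subset_sum(values, target):
    """Exhaustive search for the Subset Sum Problem: try all 2^n subsets,
    smallest first, and return the first one hitting the target sum."""
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return list(combo)
    return None

subset_sum([2, 5, 9, 14], 16)   # -> [2, 14]
subset_sum([2, 5, 9], 100)      # -> None (no subset works)
```

Doubling the input length squares the number of candidates, which is exactly why the volume/time/energy trade-offs in the abstract dominate the comparison.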
Faster-than-classical quantum algorithm for dense formulas of exact satisfiability and occupation problems
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists of identifying and efficiently characterizing a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are each bounded by O(2^(n−M′)), where n is the number of variables and M′ is the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically the hardest to solve.
The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem; this general version of the algorithm is presented and analyzed.
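For readers unfamiliar with Exact Satisfiability: an assignment is valid when every clause has *exactly one* true literal, not merely at least one as in ordinary SAT. A minimal classical sketch (positive literals only, for simplicity; this is a brute-force baseline, not the quantum algorithm) that counts valid assignments by enumerating all 2^n of them:

```python
from itertools import product

def count_xsat(n, clauses):
    """Count assignments in which every clause contains exactly one true
    variable -- the Exact Satisfiability condition. Clauses are given as
    lists of variable indices (positive literals only)."""
    return sum(
        1
        for bits in product([0, 1], repeat=n)
        if all(sum(bits[i] for i in clause) == 1 for clause in clauses)
    )

# 1-in-3 instance on x0..x3 with clauses {x0,x1,x2} and {x1,x2,x3}.
count_xsat(4, [[0, 1, 2], [1, 2, 3]])  # -> 3 valid assignments
```

The exact-one structure is what lets the quantum algorithm cut the search down by the M′ linearly independent clauses; the brute force above ignores that structure and pays the full 2^n.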
The Golden Ticket
The P versus NP problem is the most important open problem in computer science, if not all of mathematics. The Golden Ticket provides a nontechnical introduction to P versus NP, its rich history, and its algorithmic implications for everything we do with computers and beyond. In this informative and entertaining book, Lance Fortnow traces how the problem arose during the Cold War on both sides of the Iron Curtain, and gives examples of the problem from a variety of disciplines, including economics, physics, and biology. He explores problems that capture the full difficulty of the P versus NP dilemma, from discovering the shortest route through all the rides at Disney World to finding large groups of friends on Facebook. But difficulty also has its advantages: hard problems allow us to safely conduct electronic commerce and maintain privacy in our online lives. The Golden Ticket explores what we truly can and cannot achieve computationally, describing the benefits and unexpected challenges of the P versus NP problem.
A Dynamic Maintenance Strategy for Multi-Component Systems Using a Genetic Algorithm
In multi-component systems, the components are dependent rather than deteriorating independently, leading to changes in maintenance schedules. For this situation, this study proposes a grouping dynamic maintenance strategy. Considering the structure of multi-component systems, the maintenance strategy is determined according to the importance of the components. The strategy can minimize the expected depreciation cost of the system and divide the system into optimal groups that meet economic requirements. First, multi-component models are grouped, and a failure probability model of multi-component systems is established; the maintenance parameters in each maintenance cycle are updated according to the failure probability of the components. Second, the component importance indicator is introduced into the grouping model, and an optimization model aimed at maximum economic profit is established. A genetic algorithm is used to solve the non-deterministic polynomial (NP)-complete problem in the optimization model, and the optimal grouping is obtained from an initial grouping determined by random allocation. An 11-component series-parallel system is used to illustrate the effectiveness of the proposed strategy, and the influence of the system structure and the parameters on the maintenance strategy is discussed.
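The genetic-algorithm step can be sketched generically. In the toy below (my illustration, not the paper's cost model), a chromosome assigns each component to a maintenance group and fitness is a caller-supplied profit function; selection, one-point crossover, and mutation are the standard GA mechanics the abstract refers to:

```python
import random

def genetic_grouping(n_comp, n_groups, profit, pop=30, gens=60, seed=1):
    """Minimal genetic algorithm for a grouping problem: each chromosome
    maps components to maintenance groups; evolve toward higher profit."""
    rng = random.Random(seed)
    population = [[rng.randrange(n_groups) for _ in range(n_comp)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=profit, reverse=True)
        survivors = population[:pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_comp)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(n_comp)] = rng.randrange(n_groups)
            children.append(child)
        population = survivors + children
    return max(population, key=profit)

# Toy profit: reward putting component i in the same group as component i+1
# (a stand-in for shared set-up costs when neighbours are maintained together).
profit = lambda c: sum(c[i] == c[i + 1] for i in range(len(c) - 1))
best = genetic_grouping(8, 3, profit)
```

The paper's actual fitness aggregates depreciation cost and economic constraints; the skeleton above only shows why a GA sidesteps enumerating all group partitions.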
Dicke State Quantum Search for Solving the Vertex Cover Problem
This paper proposes a quantum algorithm, named Dicke state quantum search (DSQS), which prepares qubits in the Dicke state |D_k^n⟩, a superposition of D basis states, to locate the target inputs or solutions of specific patterns among 2^n unstructured input instances, where n is the number of input qubits and D = C(n,k) = O(n^k) for min(k, n−k) ≪ n/2. Compared to Grover's algorithm, a famous quantum search algorithm that calls an oracle and a diffuser O(√(2^n)) times, DSQS requires no diffuser and calls an oracle only once. Furthermore, DSQS does not need to know the number of solutions in advance. We prove the correctness of DSQS with unitary transformations, and show that each solution can be found by DSQS with an error probability less than 1/3 through O(n^k) repetitions, as long as min(k, n−k) ≪ n/2. Additionally, this paper proposes a classical algorithm, named DSQS-VCP, to generate quantum circuits based on DSQS for solving the k-vertex cover problem (k-VCP), a well-known NP-complete (NPC) problem. Complexity analysis demonstrates that DSQS-VCP operates in polynomial time and that the quantum circuit generated by DSQS-VCP has a polynomial qubit count, gate count, and circuit depth as long as min(k, n−k) ≪ n/2. We thus conclude that the k-VCP can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3 under the condition min(k, n−k) ≪ n/2. Since the k-VCP is NP-complete, NP and NPC problems can be polynomially reduced to it. If the reduced k-VCP instance satisfies min(k, n−k) ≪ n/2, then both the instance and the original NP/NPC problem instance to which it corresponds can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3. All statements of polynomial execution time in this paper apply only to VCP instances, and similar instances of other problems, where min(k, n−k) ≪ n/2; thus, they imply neither NP ⊆ BQP nor P = NP.
In this restricted regime of min(k, n−k) ≪ n/2, the Dicke-state subspace has a polynomial size of D = C(n,k) = O(n^k), and our DSQS algorithm samples from it without asymptotic superiority over exhaustive enumeration. Nevertheless, DSQS may be combined with other quantum algorithms to better exploit the strengths of quantum computation in practice. Experimental results using IBM Qiskit packages show that the DSQS-VCP quantum circuit can solve the k-VCP successfully.
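The classical counterpart of the search DSQS performs is easy to write down: enumerate the C(n, k) candidate vertex subsets of size k (the Dicke-state subspace) and test whether one covers every edge. A minimal sketch (my illustration of the search space, not the paper's circuit):

```python
from itertools import combinations

def k_vertex_cover(n, edges, k):
    """Enumerate all C(n, k) = O(n^k) vertex subsets of size k and return
    one that touches every edge, or None if no size-k cover exists."""
    for subset in combinations(range(n), k):
        s = set(subset)
        if all(u in s or v in s for u, v in edges):
            return s
    return None

# A 4-cycle with one diagonal: {0, 2} covers every edge.
k_vertex_cover(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], 2)  # -> {0, 2}
```

This makes the paper's caveat concrete: when min(k, n−k) ≪ n/2 the candidate space is already polynomial, so the classical loop above is polynomial too, and no separation from classical computing follows.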
Physical requirements for scaling up network-based biocomputation
The high energy consumption of electronic data processors, together with physical challenges limiting their further improvement, has triggered intensive interest in alternative computation paradigms. Here we focus on network-based biocomputation (NBC), a massively parallel approach where computational problems are encoded in planar networks implemented with nanoscale channels. These networks are explored by biological agents, such as biological molecular motor systems and bacteria, benefitting from their energy efficiency and availability in large numbers. We analyse and define the fundamental requirements that need to be fulfilled to scale up NBC computers to become a viable technology that can solve large NP-complete problem instances faster or with less energy consumption than electronic computers. Our work can serve as a guide for further efforts to contribute to elements of future NBC devices, and as the theoretical basis for a detailed NBC roadmap.
Multi-objective optimization of the integrated problem of location assignment and straddle carrier scheduling in maritime container terminal at import
Maritime terminals need more efficiency in their handling operations due to the phenomenal growth of world container traffic and the increase in container ship capacity. In this work, we propose a new integrated model for the optimization of maritime container terminals using straddle carriers; the problem is considered at import. We study a combination of two known problems: the storage location assignment problem and the straddle carrier scheduling problem. This approach, which combines two chronologically successive problems, leads to the use of multi-objective optimization. Specifically, we study the multi-objective Integrated Problem of Location Assignment and Straddle carrier Scheduling (IPLASS) in a maritime container terminal at import, and we prove that the problem is NP-complete. The objective is to minimize the operating cost, which we evaluate according to eight components: the completion date of the last task (the makespan), the total vehicle operating time, the total storage bay occupation time, the number of vehicles used, the number of storage bays used, the number of storage locations used, and two different costs of storage location assignment. The location assignment costs are evaluated in order to facilitate container transfer for deliveries. We assume that the operating cost is a function of these components and that the influence of each component is variable and depends on different parameters, essentially: the number of quays in the terminal, the straddle carrier traffic layout, the number of container ships to serve in the terminal, the influence of concurrent operations in the terminal, the storage space configuration, the number of free storage bays, the number of free straddle carriers, the number of free quay cranes, the mobility of quay cranes, etc. To solve IPLASS efficiently, we propose an adapted multi-objective Tabu Search algorithm.
Lower-bound evaluations are introduced to approximate the Pareto front. To explore the non-convex region of the Pareto front efficiently, we also evaluate a maximized distance adapted to the set of objectives. Efficiency indicators are developed to propose distinguished solutions to the operator, and 2D projections of the approximated Pareto frontier are given to better convey the efficiency of the proposed solutions.
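The tabu-search mechanic underlying the adapted solver can be sketched in a few lines. The skeleton below (single-objective for brevity, and with a toy cost function of my own; the paper aggregates eight cost components and works over schedules, not integers) shows the defining move: always step to the best non-tabu neighbour, even uphill, so the search can escape local optima.

```python
from collections import deque

def tabu_search(start, neighbours, cost, iters=200, tenure=7):
    """Generic tabu search: greedily move to the cheapest neighbour that is
    not on the short-term tabu list, remembering the best solution seen."""
    best = current = start
    tabu = deque([start], maxlen=tenure)   # recently visited solutions
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy: minimise |x - 42| over the integers, neighbours are x-1 and x+1.
tabu_search(0, lambda x: [x - 1, x + 1], lambda x: abs(x - 42))  # -> 42
```

In the multi-objective setting of the abstract, `cost` is replaced by Pareto-dominance checks plus the maximized-distance criterion, but the tabu list and neighbourhood moves work the same way.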
Design of network-based biocomputation circuits for the exact cover problem
Exact cover is a non-deterministic polynomial time (NP)-complete problem that is central to optimization challenges such as airline fleet planning and the allocation of cloud computing resources. Solving exact cover requires the exploration of a solution space that increases exponentially with cardinality; hence, it is time- and energy-consuming to solve large instances of exact cover on serial computers. One approach to address these challenges is to utilize the inherent parallelism and high energy efficiency of biological systems in a network-based biocomputation (NBC) device. NBC is a parallel computing paradigm in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. The network is then explored in parallel by a large number of biological agents, such as molecular-motor-propelled protein filaments. The answer to the combinatorial problem can then be inferred by measuring the positions through which the agents exit the network. Here, we (i) show how exact cover can be encoded and solved in an NBC device, (ii) define a formalization that allows us to prove the correctness of our approach and provides a mathematical basis for further study of NBC, and (iii) demonstrate various optimizations that significantly improve the computing performance of NBC. This work lays the groundwork for fabricating and scaling NBC devices to solve significantly larger combinatorial problems than have been demonstrated so far.
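To fix the problem definition: an exact cover picks a collection of the given subsets that partitions the universe, so every element is covered exactly once. A minimal brute-force sketch (the exponential enumeration over 2^m subset collections that the NBC device explores in parallel; real solvers use Knuth's Algorithm X instead):

```python
from itertools import combinations

def exact_cover(universe, subsets):
    """Brute-force exact cover: find a collection of the given subsets
    whose members partition the universe (every element covered exactly
    once), or return None if no such collection exists."""
    u = set(universe)
    for r in range(1, len(subsets) + 1):
        for combo in combinations(subsets, r):
            chosen = [set(s) for s in combo]
            # Partition test: total size matches and the union is the universe,
            # which together rule out both overlaps and uncovered elements.
            if sum(len(s) for s in chosen) == len(u) and set().union(*chosen) == u:
                return [sorted(s) for s in chosen]
    return None

exact_cover({1, 2, 3, 4, 5}, [[1, 3], [2, 4], [5], [1, 2, 5]])
# -> [[1, 3], [2, 4], [5]]
```

Each collection check is cheap; the cost is the exponentially many collections, which is precisely what the network encoding distributes across many agents.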