39,873 results for "Computing costs"
Cloud computing deployment: a cost-modelling case-study
There is a wide range of cloud services commercially available. However, there is limited research investigating the strengths and weaknesses of their cost models in relation to different types of usage requirements. We propose a new costing model that systematically evaluates cloud services and combines compute, disk storage, and memory requirements. This paper demonstrates the proposed costing model on a data set derived from a real-world industrial data-centre workload by calculating the precise cost of service provision from two leading cloud providers.
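The abstract does not give the model's formulas; as a hedged illustration, a combined cost model of this kind could sum separately priced compute, disk, and memory line items. All rates and workload figures below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of a combined cloud cost model: compute, disk
# storage, and memory are priced separately and summed per month.
# All rates and workload numbers are illustrative, not the paper's.

def monthly_cost(vcpu_hours, gb_disk, gb_ram_hours,
                 rate_vcpu=0.04, rate_disk=0.10, rate_ram=0.005):
    """Return total monthly cost in dollars for one workload."""
    compute = vcpu_hours * rate_vcpu   # pay-per-use compute
    storage = gb_disk * rate_disk      # flat monthly disk charge
    memory = gb_ram_hours * rate_ram   # memory billed per GB-hour
    return compute + storage + memory

# Example workload: 2 vCPUs running 730 h, 200 GB disk, 8 GB RAM.
cost = monthly_cost(vcpu_hours=2 * 730, gb_disk=200, gb_ram_hours=8 * 730)
print(round(cost, 2))  # 107.6
```

Comparing providers then reduces to evaluating this function with each provider's rate card against the same workload profile.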
An efficient reversible data hiding method for AMBTC compressed images
Analyzing multimedia data on mobile devices is often constrained by limited computing capacity and battery power. Therefore, more and more studies investigate algorithmically efficient methods. Sun et al. proposed a low-computing-cost reversible data hiding method for absolute moment block truncation coding (AMBTC) images with excellent embedding performance. Their method predicts quantization values and uses encrypted data bits, division information, and prediction errors to construct the stego codes. This method successfully embeds data while providing a comparable bit-rate; however, it does not fully exploit the correlation of neighboring pixels and the division of prediction errors for better embedding. The payload and bit-rate are therefore penalized, because the embedding performance depends directly on the prediction accuracy and division efficiency. In this paper, we use the median edge detection predictor to better predict the quantization values. We also employ an alternative prediction technique that increases the prediction accuracy by narrowing the range of prediction values. In addition, an efficient centralized error diversion technique is proposed to further decrease the bit-rate. The experimental results show that the proposed method offers an 8% higher payload with a 5% lower bit-rate on average compared with Sun et al.'s method, and has better embedding performance than prior related works.
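The median edge detection (MED) predictor mentioned here is the one used in JPEG-LS/LOCO-I. The paper applies it to AMBTC quantization values; this generic per-pixel sketch is an illustration of the predictor itself, not the authors' exact code:

```python
# Median edge detection (MED) predictor from JPEG-LS/LOCO-I.
# Predicts a pixel from its left (a), upper (b), and upper-left (c)
# neighbors; illustrative sketch, not the paper's implementation.

def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)   # edge detected: clamp toward the smaller neighbor
    if c <= min(a, b):
        return max(a, b)   # edge detected: clamp toward the larger neighbor
    return a + b - c       # smooth region: planar prediction

print(med_predict(100, 110, 90))   # 110 (c below both neighbors)
print(med_predict(100, 110, 120))  # 100 (c above both neighbors)
print(med_predict(100, 110, 105))  # 105 (planar: a + b - c)
```

Small prediction errors concentrate near zero, which is what makes the subsequent error coding and embedding efficient.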
What Limits the Simulation of Quantum Computers?
An ultimate goal of quantum computing is to perform calculations beyond the reach of any classical computer. It is therefore imperative that useful quantum computers be very difficult to simulate classically; otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits N or the depth D of the circuit. This difficulty has triggered recent experiments on deep, random circuits that aim to demonstrate that quantum devices may already perform tasks beyond the reach of classical computing. These real quantum computing devices, however, suffer from many sources of decoherence and imprecision which limit the degree of entanglement that can actually be reached to a fraction of its theoretical maximum. They are characterized by an exponentially decaying fidelity F ∼ (1 − ε)^(ND), with an error rate ε per operation as small as ≈ 1% for current devices with several dozen qubits, or even smaller for smaller devices. In this work, we provide new insight into the computing capabilities of real quantum computers by demonstrating that they can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wave functions using matrix product states, which are able to capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε, so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with N and D, in sharp contrast with exact simulation algorithms. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞. Getting below ε∞ requires computing resources that increase exponentially with ε∞/ε. For a two-dimensional array of N = 54 qubits and a circuit with control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor 3 for similar computing time. Our results suggest that, despite the high fidelity reached by quantum devices, only a tiny fraction (∼10^−8) of the system Hilbert space is actually being exploited.
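The quoted fidelity scaling F ∼ (1 − ε)^(ND) can be checked with a one-line calculation. Taking the paper's N = 54 qubits and ε = 1% with an illustrative depth D = 20 (the depth value is an assumption, not from the abstract):

```python
# Fidelity decay F ~ (1 - eps)**(N * D) for a noisy quantum circuit.
N, D, eps = 54, 20, 0.01   # qubits, depth (illustrative), error per op
F = (1 - eps) ** (N * D)
print(f"{F:.2e}")  # ~1.9e-05: deep noisy circuits retain almost no fidelity
```

Even a 1% per-operation error rate leaves only a ∼10⁻⁵ overlap with the ideal state after ~1000 operations, which is why approximate classical simulation at matching fidelity becomes tractable.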
Phase transitions in random circuit sampling
Undesired coupling to the surrounding environment destroys long-range correlations in quantum processors and hinders coherent evolution in the nominally available computational space. This noise is an outstanding challenge when leveraging the computational power of near-term quantum processors [1]. It has been shown that benchmarking random circuit sampling with cross-entropy benchmarking can provide an estimate of the effective size of the Hilbert space coherently available [2–8]. Nevertheless, the outputs of quantum algorithms can be trivialized by noise, making them susceptible to spoofing by classical computation. Here, by implementing an algorithm for random circuit sampling, we demonstrate experimentally that two phase transitions are observable with cross-entropy benchmarking, which we explain theoretically with a statistical model. The first is a dynamical transition as a function of the number of cycles and is the continuation of the anti-concentration point in the noiseless case. The second is a quantum phase transition controlled by the error per cycle; to identify it analytically and experimentally, we create a weak-link model, which allows us to vary the strength of the noise versus coherent evolution. Furthermore, by presenting a random circuit sampling experiment in the weak-noise phase with 67 qubits at 32 cycles, we demonstrate that the computational cost of our experiment is beyond the capabilities of existing classical supercomputers. Our experimental and theoretical work establishes the existence of transitions to a stable, computationally complex phase that is reachable with current quantum processors.
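The linear cross-entropy benchmark (XEB) referred to above estimates fidelity as F = 2^n · ⟨p_ideal(x)⟩ − 1, averaged over sampled bitstrings x. A minimal sketch of the estimator (the toy 2-qubit numbers are illustrative, not experimental data):

```python
# Linear cross-entropy benchmark: F = 2**n * <p_ideal(x)> - 1, averaged
# over sampled bitstrings. A noiseless Porter-Thomas sampler gives
# F -> 1; a fully depolarized (uniform) sampler gives F -> 0.
# Toy 2-qubit numbers are illustrative, not experimental data.

def linear_xeb(n_qubits, ideal_probs_of_samples):
    mean_p = sum(ideal_probs_of_samples) / len(ideal_probs_of_samples)
    return (2 ** n_qubits) * mean_p - 1

# A uniformly random sampler hits every bitstring with ideal probability
# 1/2**n on average, so the estimator returns 0 (no coherent signal).
uniform = [1 / 4] * 8          # n = 2: every sample has p_ideal = 0.25
print(linear_xeb(2, uniform))  # 0.0
```

The transitions the paper describes appear as qualitative changes in how this estimator decays with circuit cycles and error per cycle.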
Efficient calculation of carrier scattering rates from first principles
The electronic transport behaviour of materials determines their suitability for technological applications. We develop a computationally efficient method for calculating carrier scattering rates of solid-state semiconductors and insulators from first-principles inputs. The present method extends existing polar and non-polar electron-phonon coupling, ionized impurity, and piezoelectric scattering mechanisms formulated for isotropic band structures to support highly anisotropic materials. We test the formalism by calculating the electronic transport properties of 23 semiconductors, including the large 48-atom CH3NH3PbI3 hybrid perovskite, and comparing the results against experimental measurements and more detailed scattering simulations. The Spearman rank coefficient of mobility against experiment (r_s = 0.93) improves significantly on results obtained using a constant relaxation-time approximation (r_s = 0.52). We find our approach offers similar accuracy to state-of-the-art methods at approximately 1/500th of the computational cost, thus enabling its use in high-throughput computational workflows for the accurate screening of carrier mobilities, lifetimes, and thermoelectric power.
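The reported r_s is a Spearman rank correlation between predicted and measured mobilities, i.e. a Pearson correlation computed on ranks. A minimal pure-Python version without tie handling (the mobility values are made up for illustration, not the paper's 23 compounds):

```python
# Spearman rank correlation: Pearson correlation computed on ranks.
# Toy mobility data (cm^2/Vs) below are made up for illustration.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r  # note: no tie handling in this sketch

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

predicted = [120.0, 45.0, 300.0, 8.0, 60.0]
measured = [100.0, 50.0, 250.0, 10.0, 40.0]
print(spearman(predicted, measured))  # 0.9 (perfect ordering would give 1.0)
```

Because it only compares rankings, r_s rewards a screening method that orders materials correctly even when absolute mobilities are off.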
Low-Depth Quantum Simulation of Materials
Quantum simulation of the electronic structure problem is one of the most researched applications of quantum computing. The majority of quantum algorithms for this problem encode the wavefunction using N Gaussian orbitals, leading to Hamiltonians with O(N^4) second-quantized terms. We avoid this overhead and extend methods to condensed phase materials by utilizing a dual form of the plane wave basis which diagonalizes the potential operator, leading to a Hamiltonian representation with O(N^2) second-quantized terms. Using this representation, we can implement single Trotter steps of the Hamiltonians with linear gate depth on a planar lattice. Properties of the basis allow us to deploy Trotter- and Taylor-series-based simulations with respective circuit depths of O(N^(7/2)) and Õ(N^(8/3)) for fixed charge densities. Variational algorithms also require significantly fewer measurements in this basis, ameliorating a primary challenge of that approach. While our approach applies to the simulation of arbitrary electronic structure problems, the basis sets explored in this work will be most practical for treating periodic systems, such as crystalline materials, in the near term. We conclude with a proposal to simulate the uniform electron gas (jellium) using a low-depth variational ansatz realizable on near-term quantum devices. From these results, we identify simulations of low-density jellium as a promising first setting to explore quantum supremacy in electronic structure.
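The practical weight of the O(N^4) → O(N^2) reduction in Hamiltonian terms is easy to see numerically; the basis-set size below is illustrative, not from the paper:

```python
# Rough term-count comparison for second-quantized Hamiltonians:
# Gaussian-orbital encodings carry O(N**4) terms, the dual plane-wave
# basis O(N**2). N = 100 spin-orbitals is an illustrative size.
N = 100
gaussian_terms = N ** 4      # 100,000,000
plane_wave_terms = N ** 2    # 10,000
print(gaussian_terms // plane_wave_terms)  # 10000: four orders of magnitude
```

Since circuit depth and measurement counts both track the number of Hamiltonian terms, this quadratic-versus-quartic gap is what enables the linear-depth Trotter steps claimed above.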
LDPSR: a super-resolution network featuring the lightweight duplication plugin
A new CNN-based super-resolution (SR) network is introduced: the Lightweight Duplication Super-Resolution Network (LDPSR). This network achieves performance similar to mainstream non-lightweight networks while maintaining a lower computational cost and parameter count. It features a specially designed Lightweight Duplication Plugin (LDP), which generates only addition operations without increasing the burden of multiplication operations, effectively improving the computational performance of the network. The plugin greatly reduces the network size by segmenting input images and expanding the segments separately, avoiding an increase in the parameter count. The network architecture comprises a shallow part, a deep part, and an up-sampling part. By combining special convolution modules with the lightweight plugins, feature diversity is enhanced while parameters and computational costs are kept under control. This study provides a new network architecture and computing method that achieves efficient SR on lightweight devices with lower computational cost and parameter requirements, realizing the practical application value of SR technology.
Machine learning–accelerated computational fluid dynamics
Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics, and plasma physics. Fluids are well described by the Navier–Stokes equations, but solving these equations at scale remains daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable tradeoffs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large-eddy simulation, our results are as accurate as baseline solvers with 8 to 10× finer resolution in each spatial dimension, resulting in 40- to 80-fold computational speedups. Our method remains stable during long simulations and generalizes to forcing functions and Reynolds numbers outside of the flows where it is trained, in contrast to black-box machine-learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.
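The speedup arithmetic here follows from grid scaling: for a 2D unsteady solver refined by a factor r, grid points grow as r² and (via a CFL-type time-step limit) the number of time steps as r, so total work grows roughly as r³. A hedged back-of-envelope check (the r³ scaling is the standard naive estimate, not a claim from the paper):

```python
# Naive cost scaling for a 2D unsteady solver refined by factor r:
# grid points grow as r**2 and (via a CFL-type limit) time steps as r,
# so total work grows roughly as r**3. The reported net speedups
# (40-80x at r = 8-10) are smaller than this naive bound because the
# learned solver does more work per coarse grid point.
for r in (8, 10):
    print(f"r = {r}: naive work ratio ~ {r ** 3}x")
# r = 8: 512x; r = 10: 1000x
```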
Estimating economic damage from climate change in the United States
Estimates of climate change damage are central to the design of climate policies. Here, we develop a flexible architecture for computing damages that integrates climate science, econometric analyses, and process models. We use this approach to construct spatially explicit, probabilistic, and empirically derived estimates of economic damage in the United States from climate change. The combined value of market and nonmarket damage across analyzed sectors—agriculture, crime, coastal storms, energy, human mortality, and labor—increases quadratically in global mean temperature, costing roughly 1.2% of gross domestic product per +1°C on average. Importantly, risk is distributed unequally across locations, generating a large transfer of value northward and westward that increases economic inequality. By the late 21st century, the poorest third of counties are projected to experience damages between 2 and 20% of county income (90% chance) under business-as-usual emissions (Representative Concentration Pathway 8.5).
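As a hedged arithmetic illustration of the quadratic damage claim: if damages scale as D(ΔT) = k·ΔT² and the average cost works out to roughly 1.2% of GDP per +1 °C, then k can be backed out and evaluated at any warming level. The 0–4 °C calibration range is an assumption for illustration, not from the paper:

```python
# Quadratic damage function D(dT) = k * dT**2 (fraction of GDP).
# Calibrate k so the average slope over 0-4 C equals 1.2%/C; the
# 4 C calibration point is an illustrative assumption.
avg_slope = 0.012                       # 1.2% of GDP per +1 C on average
dT_cal = 4.0
k = avg_slope * dT_cal / dT_cal ** 2    # D(4)/4 = 4k = 0.012  ->  k = 0.003

for dT in (1.0, 2.0, 3.0, 4.0):
    print(f"+{dT:.0f} C: {100 * k * dT ** 2:.1f}% of GDP")
# +1 C: 0.3%, +2 C: 1.2%, +3 C: 2.7%, +4 C: 4.8%
```

The convexity is the point: under a quadratic response, each additional degree of warming costs more than the last.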