Search Results

31,253 result(s) for "Computing time"
Cloud analytics with Google Cloud Platform : an end-to-end guide to processing and analyzing big data using Google Cloud Platform
"With the ongoing data explosion, more and more organizations all over the world are slowly migrating their infrastructure to the cloud. These cloud platforms also provide their distinct analytics services to help you get faster insights from your data. This book will give you an introduction to the concept of analytics on the cloud, and the different cloud services popularly used for processing and analyzing data. If you're planning to adopt the cloud analytics model for your business, this book will help you understand the design and business considerations to be kept in mind, and choose the best tools and alternatives for analytics, based on your requirements. The chapters in this book will take you through the 70+ services available in Google Cloud Platform and their implementation for practical purposes. From ingestion to processing your data, this book contains best practices on building an end-to-end analytics pipeline on the cloud by leveraging popular concepts such as machine learning and deep learning. By the end of this book, you will have a better understanding of cloud analytics as a concept as well as a practical know-how of its implementation."--Publisher description.
Study of Behavior of Geometric Symmetries of 3D Objects with Digital Fresnel–Kirchhoff Holograms, Using Non-Redundant Calculations
Techniques for producing fast Huygens–Fresnel–Kirchhoff digital holograms using kernel symmetry are studied. This study demonstrates non-linear behavior in computing time as the sampled area changes with respect to the propagated diffracted area. Given the large amount of data involved in 3D object formation, symmetries are crucial in reducing the computational time. The evaluation of diffraction patterns is implemented to avoid redundant calculations while preserving the precision of the results. This algorithm decreases the required computing time, compared to direct calculation, depending on the symmetry of the axes. Interestingly, the reduction in computing time relative to the number of symmetries is not linear. Computing time curves are presented. Whether certain computations are redundant is determined by the parity (even or odd) of the object matrix along its x and y axes. Diagonal symmetries possess intrinsic redundancy along their axes. The rotation of the image must align with the rotation of the geometric coordinates in each section to ensure accurate calculations.
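The symmetry savings this abstract describes can be illustrated with a minimal sketch (illustrative only, not the authors' algorithm): for a kernel that is even in both x and y, only one quadrant of the grid needs to be evaluated, and the rest is recovered by mirroring.

```python
import math

def kernel_full(n):
    """Direct evaluation of an x/y-symmetric kernel on an n x n grid (n = 2m+1)."""
    m = n // 2
    return [[math.cos(x * x + y * y) for x in range(-m, m + 1)]
            for y in range(-m, m + 1)]

def kernel_mirrored(n):
    """Evaluate only the non-negative quadrant (~n^2/4 cosine calls),
    then mirror across both axes -- the kind of redundancy elimination
    the abstract refers to (hypothetical kernel, for illustration)."""
    m = n // 2
    quad = [[math.cos(x * x + y * y) for x in range(m + 1)]
            for y in range(m + 1)]
    rows = [row[:0:-1] + row for row in quad]  # mirror in x
    return rows[:0:-1] + rows                  # mirror in y
```

Both functions return identical grids, but the mirrored version does roughly a quarter of the expensive evaluations; a diagonal symmetry would cut the quadrant itself roughly in half again.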
Phase media : space, time and the politics of smart objects
James Ash theorizes how smart objects, understood as Internet-connected and sensor-enabled devices, are altering users' experience of their environment. Rather than networks connected by lines of transmission, smart objects generate phases, understood as space-times that modulate the spatio-temporal intelligibility of both humans and non-humans. Examining a range of objects and services from the Apple Watch to Nest Cam to Uber, Ash suggests that the modulation of spatio-temporal intelligibility is partly shaped by the commercial logics of the industries that design and manufacture smart objects, but can also exceed them. Drawing upon the work of Martin Heidegger, Gilbert Simondon and Bruno Latour, Ash argues that smart objects have their own phase politics, which offer opportunities for new forms of public to emerge. Phase Media develops a conceptual vocabulary to contend that smart objects do more than just enable a world of increased corporate control and surveillance, as they also provide the tools to expose and re-order the very logics and procedures that created them.
Choosing the location of the source points of MFS by effective condition number for the interior scattering problem
This paper explores the application of the method of fundamental solutions (MFS) for addressing the interior scattering problem of a cavity. In the implementation of this method, determining the optimal placement of virtual source points outside the computational domain remains a critical challenge. Employing the effective condition number as a tool, this study selects the expansion factor corresponding to the maximum value of the effective condition number to determine the placement of the source points. Numerical experiments have demonstrated the effectiveness of this approach and have provided a comparison with the leave-one-out cross-validation (LOOCV) method. The findings indicate that the effective condition number algorithm offers superior accuracy and reduces computational time with an equivalent number of source points.
What Limits the Simulation of Quantum Computers?
An ultimate goal of quantum computing is to perform calculations beyond the reach of any classical computer. It is therefore imperative that useful quantum computers be very difficult to simulate classically, otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits N or the depth D of the circuit. This difficulty has triggered recent experiments on deep, random circuits that aim to demonstrate that quantum devices may already perform tasks beyond the reach of classical computing. These real quantum computing devices, however, suffer from many sources of decoherence and imprecision which limit the degree of entanglement that can actually be reached to a fraction of its theoretical maximum. They are characterized by an exponentially decaying fidelity F ∼ (1 − ε)^(ND), with an error rate ε per operation as small as ≈ 1% for current devices with several dozen qubits, or even smaller for smaller devices. In this work, we provide new insight on the computing capabilities of real quantum computers by demonstrating that they can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wave functions using matrix product states, which are able to capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with N and D, in sharp contrast with exact simulation algorithms. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞.
Getting below ε∞ requires computing resources that increase exponentially with ε∞/ε. For a two-dimensional array of N = 54 qubits and a circuit with control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor 3 for similar computing time. Our results suggest that, despite the high fidelity reached by quantum devices, only a tiny fraction (∼10⁻⁸) of the system Hilbert space is actually being exploited.
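The fidelity scaling F ∼ (1 − ε)^(ND) quoted in this abstract can be checked with a few lines of arithmetic (illustrative only; the qubit count and depth below are example values, not the paper's benchmarks):

```python
def fidelity(eps, n_qubits, depth):
    """Exponentially decaying fidelity F ~ (1 - eps)**(N*D),
    with error rate eps per operation, as stated in the abstract."""
    return (1.0 - eps) ** (n_qubits * depth)

# With eps = 1% per operation, N = 54 qubits and an assumed depth D = 20,
# the total number of operations is N*D = 1080 and fidelity is already tiny:
f = fidelity(0.01, 54, 20)
```

Even a per-operation error of 1% drives the overall fidelity below 10⁻⁴ at this circuit size, which is why low-entanglement compressed simulation can track such devices cheaply.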
Quantitative evaluation of deep learning frameworks in heterogeneous computing environment
Deep learning frameworks are powerful tools to support model training. They dispatch operators by mapping them into a series of kernel functions and launching these kernel functions to specialized devices such as GPUs. However, little is known about the performance of the dispatching and mapping mechanisms in different frameworks, although these mechanisms directly affect training time. This paper presents a performance evaluation across frameworks by examining their kernel function efficiency and operator dispatching mechanisms. We introduce two evaluation metrics, device computing time (DCT) and device occupancy ratio (DOR), based on the device’s active and idle states. To ensure comparable evaluation results, we propose a three-step verification method including hyper-parameter, model, and updating method equivalences. Because implementations are not equivalent across frameworks, we present an equivalence adjustment method based on the number of operators. Our evaluation results demonstrate the device utilization capability of five frameworks, namely PyTorch, TensorFlow 1, TensorFlow 2, MXNet, and PaddlePaddle, and reveal the potential for further optimizing the training performance of deep learning frameworks.
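A plausible reading of the two metrics named in this abstract can be sketched in a few lines. The paper's exact definitions may differ; this sketch assumes DCT is the total time the device spends executing kernels and DOR is that time as a fraction of the profiled window.

```python
def device_metrics(kernel_intervals, window):
    """Device computing time (DCT) and device occupancy ratio (DOR),
    computed from non-overlapping (start, end) kernel-execution intervals
    within a profiled window of wall-clock time. Illustrative definitions
    inferred from the abstract, not taken from the paper.
    """
    dct = sum(end - start for start, end in kernel_intervals)  # active time
    dor = dct / window                                         # active fraction
    return dct, dor

# One training step profiled over a 10 ms window with three kernel launches:
dct, dor = device_metrics([(0.0, 2.0), (3.0, 6.0), (7.5, 9.5)], 10.0)
```

Here the device is active for 7 ms of the 10 ms window, so DOR = 0.7; the 3 ms of idle gaps between launches is exactly the dispatch overhead the evaluation targets.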
Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing
Neuromorphic computers could overcome efficiency bottlenecks inherent to conventional computing through parallel programming and readout of artificial neural network weights in a crossbar memory array. However, selective and linear weight updates and < 10-nanoampere read currents are required for learning that surpasses conventional computing efficiency. We introduce an ionic floating-gate memory array based on a polymer redox transistor connected to a conductive-bridge memory (CBM). Selective and linear programming of a redox transistor array is executed in parallel by overcoming the bridging threshold voltage of the CBMs. Synaptic weight readout with currents < 10 nanoamperes is achieved by diluting the conductive polymer with an insulator to decrease the conductance. The redox transistors endure >1 billion write-read operations and support 1-megahertz write-read frequencies.
MEGA12: Molecular Evolutionary Genetic Analysis Version 12 for Adaptive and Green Computing
Abstract We introduce the 12th version of the Molecular Evolutionary Genetics Analysis (MEGA12) software. This latest version brings many significant improvements by reducing the computational time needed for selecting optimal substitution models and conducting bootstrap tests on phylogenies using maximum likelihood (ML) methods. These improvements are achieved by implementing heuristics that minimize likely unnecessary computations. Analyses of empirical and simulated datasets show substantial time savings by using these heuristics without compromising the accuracy of results. MEGA12 also links in an evolutionary sparse learning approach to identify fragile clades and associated sequences in evolutionary trees inferred through phylogenomic analyses. In addition, this version includes fine-grained parallelization for ML analyses, support for high-resolution monitors, and an enhanced Tree Explorer. MEGA12 can be downloaded from https://www.megasoftware.net.
Real-Time Fire Monitoring and Visualization for the Post-Ignition Fire State in a Building
During a fire event, environmental threats to building occupants and first responders include extreme temperatures, toxic gases, disorientation due to poor visibility coupled with unfamiliar surroundings, and a changing indoor environment. In addition to these hazards, firefighters often lack critical information for making decisions once on the scene. The lack of information coupled with the dynamics of natural fire events leads to several near-misses, injuries, and deaths each year. Additionally, these challenges slow the rescue time of building occupants and prolong the suppression of the fire. Integrating real-time measurements from sensors into the fire intervention strategy may provide an opportunity for a new technological advancement to improve the practice of firefighting. In this study, a computational framework using Lightweight Communications and Marshalling was developed for connecting real-time fire data to an event detection sub-model to demonstrate how computing can be used for fire monitoring and sensor-assisted firefighting. A post-processed example using these monitoring computations in conjunction with a building information model is provided as a demonstration for presenting real-time data in the field. This work serves as a step towards an intelligent firefighting system based on real-time computing tools.
Anytime Monte Carlo
Monte Carlo algorithms simulate some prescribed number of samples, taking some random real time to complete the computations necessary. This work considers the converse: to impose a real-time budget on the computation, which results in the number of samples simulated being random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time is apparent. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain—rather than an average over all states—is required, which is the case in parallel tempering implementations of MCMC. The length bias does not diminish with the compute budget in this case. It also occurs in sequential Monte Carlo (SMC) algorithms, which are the focus of this paper. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We first show that for any MCMC algorithm, the length bias of the final state’s distribution due to the imposed real-time computing budget can be eliminated by using a multiple chain construction. The utility of this construction is then demonstrated on a large-scale SMC² implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within the SMC² algorithm, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing idleness due to waiting times and providing substantial control over the total compute budget.
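The basic inversion this abstract describes — fixing wall-clock time rather than sample count — can be sketched as follows. This is a toy illustration only: it shows why the number of samples becomes random under a real-time budget, and it does not implement the paper's multiple-chain construction for removing the resulting length bias.

```python
import random
import time

def anytime_mcmc(step, state, budget_s):
    """Run MCMC-style moves until a real-time budget expires.
    The sample count n is random (it depends on how long each move takes),
    so the final state carries a compute-time length bias -- the problem
    the paper's multiple-chain construction is designed to eliminate."""
    deadline = time.monotonic() + budget_s
    n = 0
    while time.monotonic() < deadline:
        state = step(state)
        n += 1
    return state, n

# Toy Gaussian random-walk move under a 50 ms compute budget:
move = lambda x: x + random.gauss(0.0, 1.0)
final_state, n_samples = anytime_mcmc(move, 0.0, 0.05)
```

In an SMC² setting, each processor would run such a budgeted loop between resampling steps, so all processors reach the synchronization point at the same wall-clock time instead of waiting on the slowest chain.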