587 result(s) for "Wright, John N."
Dictionary learning in Fourier-transform scanning tunneling spectroscopy
Modern high-resolution microscopes are commonly used to study specimens that have dense and aperiodic spatial structure. Extracting meaningful information from images obtained from such microscopes remains a formidable challenge. Fourier analysis is commonly used to analyze the structure of such images. However, the Fourier transform fundamentally suffers from severe phase noise when applied to aperiodic images. Here, we report the development of an algorithm based on nonconvex optimization that directly uncovers the fundamental motifs present in a real-space image. Apart from being quantitatively superior to traditional Fourier analysis, we show that this algorithm also uncovers phase sensitive information about the underlying motif structure. We demonstrate its usefulness by studying scanning tunneling microscopy images of a Co-doped iron arsenide superconductor and prove that the application of the algorithm allows for the complete recovery of quasiparticle interference in this material. Aperiodic structure imaging suffers limitations when utilizing Fourier analysis. The authors report an algorithm that quantitatively overcomes these limitations based on nonconvex optimization, demonstrated by studying aperiodic structures via the phase sensitive interference in STM images.
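The algorithm described is a nonconvex dictionary-learning scheme that factors an image into a short motif and a sparse activation map. As a rough illustration of the underlying idea only (a synthetic 1-D toy with made-up data and hand-picked step sizes, not the authors' implementation), one can alternate between sparse-coding steps on the activation map and gradient steps on the motif:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "image": a short motif convolved with a sparse activation map.
n, k = 256, 16
motif = np.exp(-0.5 * ((np.arange(k) - k / 2) / 2.0) ** 2)
motif /= np.linalg.norm(motif)
x_true = np.zeros(n)
x_true[rng.choice(n - k, size=6, replace=False)] = rng.uniform(1.0, 2.0, 6)
y = np.convolve(x_true, motif)            # observed signal, length n + k - 1

# Alternating minimization: ISTA steps on the sparse map x,
# normalized gradient steps on the motif estimate a.
a = motif + 0.1 * rng.standard_normal(k)  # mildly perturbed initial motif guess
a /= np.linalg.norm(a)
x = np.zeros(n)
lam, step_x, step_a = 0.05, 0.05, 0.01
err0 = np.linalg.norm(y)                  # residual with x = 0
for _ in range(300):
    r = np.convolve(x, a) - y             # current residual
    x = x - step_x * np.correlate(r, a, mode="valid")           # gradient step on x
    x = np.sign(x) * np.maximum(np.abs(x) - step_x * lam, 0.0)  # soft-threshold
    r = np.convolve(x, a) - y
    a = a - step_a * np.correlate(r, x, mode="valid")           # gradient step on a
    a /= np.linalg.norm(a)                # keep the motif unit-norm
err = np.linalg.norm(np.convolve(x, a) - y)
```

Minimizing over the motif and the activations jointly is nonconvex; the sketch leans on a benign initialization, whereas handling that nonconvexity rigorously is precisely the paper's contribution.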
Explaining science's success
Paul Feyerabend famously asked: what's so great about science? One answer is that it has been surprisingly successful in getting things right about the natural world, more successful than non-scientific or pre-scientific systems such as religion or philosophy. Science has been able to formulate theories that have successfully predicted novel observations. It has produced theories about parts of reality that were not observable or accessible at the time those theories were first advanced, but the claims about those inaccessible areas have since turned out to be true. And science has, on occasion, advanced on more or less a priori grounds theories that subsequently turned out to be highly empirically successful. In this book the philosopher of science John Wright delves deep into science's methodology to offer an explanation for this remarkable success story.
Detecting and Diagnosing Terrestrial Gravitational-Wave Mimics Through Feature Learning
As engineered systems grow in complexity, there is an increasing need for automatic methods that can detect, diagnose, and even correct transient anomalies that inevitably arise and can be difficult or impossible to diagnose and fix manually. Among the most sensitive and complex systems of our civilization are the detectors that search for incredibly small variations in distance caused by gravitational waves -- phenomena originally predicted by Albert Einstein to emerge and propagate through the universe as the result of collisions between black holes and other massive objects in deep space. The extreme complexity and precision of such detectors causes them to be subject to transient noise issues that can significantly limit their sensitivity and effectiveness. In this work, we present a demonstration of a method that can detect and characterize emergent transient anomalies of such massively complex systems. We illustrate the performance, precision, and adaptability of the automated solution via one of the prevalent issues limiting gravitational-wave discoveries: noise artifacts of terrestrial origin that contaminate gravitational wave observatories' highly sensitive measurements and can obscure or even mimic the faint astrophysical signals for which they are listening. Specifically, we demonstrate how a highly interpretable convolutional classifier can automatically learn to detect transient anomalies from auxiliary detector data without needing to observe the anomalies themselves. We also illustrate several other useful features of the model, including how it performs automatic variable selection to reduce tens of thousands of auxiliary data channels to only a few relevant ones; how it identifies behavioral signatures predictive of anomalies in those channels; and how it can be used to investigate individual anomalies and the channels associated with them.
Architectural Optimization and Feature Learning for High-Dimensional Time Series Datasets
As our ability to sense increases, we are experiencing a transition from data-poor problems, in which the central issue is a lack of relevant data, to data-rich problems, in which the central issue is to identify a few relevant features in a sea of observations. Motivated by applications in gravitational-wave astrophysics, we study the problem of predicting the presence of transient noise artifacts in a gravitational wave detector from a rich collection of measurements from the detector and its environment. We argue that feature learning--in which relevant features are optimized from data--is critical to achieving high accuracy. We introduce models that reduce the error rate by over 60% compared to the previous state of the art, which used fixed, hand-crafted features. Feature learning is useful not only because it improves performance on prediction tasks; the results provide valuable information about patterns associated with phenomena of interest that would otherwise be undiscoverable. In our application, features found to be associated with transient noise provide diagnostic information about its origin and suggest mitigation strategies. Learning in high-dimensional settings is challenging. Through experiments with a variety of architectures, we identify two key factors in successful models: sparsity, for selecting relevant variables within the high-dimensional observations; and depth, which confers flexibility for handling complex interactions and robustness with respect to temporal variations. We illustrate their significance through systematic experiments on real detector data. Our results provide experimental corroboration of common assumptions in the machine-learning community and have direct applicability to improving our ability to sense gravitational waves, as well as to many other problem settings with similarly high-dimensional, noisy, or partly irrelevant data.
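The sparsity ingredient can be illustrated in miniature (a hypothetical toy with synthetic channels, not the paper's architecture): an l1-penalized logistic model, fit by proximal gradient descent, drives the weights of irrelevant channels exactly to zero, performing the kind of automatic variable selection described above:

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 600, 50                          # samples, "channels"
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0], w_true[1] = 2.0, -2.0        # only two channels actually matter
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.random(n) < p).astype(float)   # binary labels (anomaly / no anomaly)

# Proximal gradient descent on l1-penalized logistic loss.
w = np.zeros(d)
lam, step = 0.05, 0.5
for _ in range(800):
    z = 1.0 / (1.0 + np.exp(-(X @ w)))
    g = X.T @ (z - y) / n                                   # logistic gradient
    w = w - step * g
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0)  # soft-threshold

selected = np.flatnonzero(np.abs(w) > 1e-6)  # surviving channels
```

With the penalty at this level, the two informative channels survive while almost all of the 48 irrelevant ones are thresholded away; depth (the second factor the abstract identifies) would replace the linear map with a deeper network.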
Error correction for high-dimensional data via convex programming
Modern data processing applications in signal and image processing, web search and ranking, and bioinformatics are increasingly characterized by large quantities of very high-dimensional data. As datasets grow larger, however, methods of data collection have necessarily become less controlled, introducing corruption, outliers, and missing data. This thesis addresses the basic question of when and how simple data representations can be recovered from such non-ideal observations. In particular, we develop theoretical explanations for the efficacy of convex programming approaches to error correction for both vector and matrix-valued data. For vector-valued observations, we prove that if a signal is sufficiently sparse with respect to a highly correlated basis, then as long as the fraction of errors is bounded away from one, it can be recovered from almost any error by solving a simple convex program. This result suggests that accurate recovery of sparse and nonnegative signals is possible and computationally feasible even with almost all of the observations corrupted. For matrices, we consider the fundamental problem of recovering a low-rank matrix from large but sporadic corruption. We prove that “almost all” matrices of low enough rank can be efficiently and exactly recovered from almost all error sign-and-support patterns, again by solving a simple convex program. This result holds even when the rank is proportional to the observation dimension (up to a logarithmic factor) and a non-vanishing fraction of the observations are corrupted. It also implies the first proportional growth result for completing a low-rank matrix from a small fraction of its entries. Finally, we show how these theoretical developments lead to simple, scalable, and robust algorithms for face recognition in the presence of varying illumination and occlusion. The idea is extremely simple: seek the sparsest representation of the test image as a linear combination of training images plus a sparse error term due to occlusion. In addition to achieving excellent performance on public databases, this approach sheds light on several important issues in face recognition, such as the choice of features and robustness to corruption and occlusion.
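The matrix recovery problem described here is principal component pursuit: minimize ||L||_* + lam·||S||_1 subject to L + S = M. A compact sketch on synthetic data, using a standard ADMM-style augmented Lagrangian iteration (the lam = 1/sqrt(n) and mu scalings follow common conventions; this is an illustration, not the thesis's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank matrix plus sparse gross corruption.
n = 50
L_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank 2
S_true = np.zeros((n, n))
mask = rng.random((n, n)) < 0.05             # ~5% of entries corrupted
S_true[mask] = 5.0 * rng.choice([-1.0, 1.0], size=mask.sum())
M = L_true + S_true

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding: prox of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Augmented Lagrangian iteration for min ||L||_* + lam ||S||_1  s.t.  L + S = M.
lam = 1.0 / np.sqrt(n)
mu = n * n / (4.0 * np.abs(M).sum())
L, S, Y = np.zeros((n, n)), np.zeros((n, n)), np.zeros((n, n))
for _ in range(1000):
    L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
    S = soft(M - L + Y / mu, lam / mu)       # sparse update
    Y = Y + mu * (M - L - S)                 # dual ascent on the constraint

rel_err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
```

In the "almost all" regime the thesis characterizes, the convex program's minimizer coincides with the true pair (L, S), which is why this simple iteration recovers the planted low-rank factor to high accuracy.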
Efficient Gravitational-wave Glitch Identification from Environmental Data Through Machine Learning
The LIGO observatories detect gravitational waves through monitoring changes in the detectors' length down to below 10⁻¹⁹ m/√Hz variation, a small fraction of the size of the atoms that make up the detector. To achieve this sensitivity, the detector and its environment need to be closely monitored. Beyond the gravitational wave data stream, LIGO continuously records hundreds of thousands of channels of environmental and instrumental data in order to monitor for possibly minuscule variations that contribute to the detector noise. A particularly challenging issue is the appearance in the gravitational wave signal of brief, loud noise artifacts called "glitches," which are environmental or instrumental in origin but can mimic true gravitational waves and therefore hinder sensitivity. Currently they are primarily identified by analysis of the gravitational wave data stream. Here we present a machine learning approach that can identify glitches by monitoring all environmental and detector data channels, a task that has not previously been pursued due to its scale and the number of degrees of freedom within gravitational-wave detectors. The presented method is capable of reducing the gravitational-wave detector network's false alarm rate and improving the LIGO instruments, consequently enhancing detection confidence.
Compressed Sensing Microscopy with Scanning Line Probes
In applications of scanning probe microscopy, images are acquired by raster scanning a point probe across a sample. Viewed from the perspective of compressed sensing (CS), this pointwise sampling scheme is inefficient, especially when the target image is structured. While replacing point measurements with delocalized, incoherent measurements has the potential to yield order-of-magnitude improvements in scan time, implementing the delocalized measurements of CS theory is challenging. In this paper we study a partially delocalized probe construction, in which the point probe is replaced with a continuous line, creating a sensor which essentially acquires line integrals of the target image. We show through simulations, rudimentary theoretical analysis, and experiments that these line measurements can image sparse samples far more efficiently than traditional point measurements, provided the local features in the sample are sufficiently separated. Despite this promise, practical reconstruction from line measurements poses additional difficulties: the measurements are partially coherent, and real measurements exhibit nonidealities. We show how to overcome these limitations using natural strategies (reweighting to cope with coherence, blind calibration for nonidealities), culminating in an end-to-end demonstration.
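A toy discretization of the idea (hypothetical: a small grid, lines in only four directions, and plain ISTA rather than the reweighted scheme the paper actually uses): each measurement sums the pixels along one line, and an l1 penalty recovers a sparse image from far fewer line integrals than pixels:

```python
import numpy as np

N = 12                                     # image is N x N, so N*N = 144 unknowns

def line_rows():
    """Indicator rows: each measurement sums pixels along one line."""
    rows = []
    for i in range(N):                     # horizontal lines
        m = np.zeros((N, N)); m[i, :] = 1; rows.append(m.ravel())
    for j in range(N):                     # vertical lines
        m = np.zeros((N, N)); m[:, j] = 1; rows.append(m.ravel())
    for c in range(2 * N - 1):             # diagonals i + j = c
        m = np.zeros((N, N))
        for i in range(max(0, c - N + 1), min(N, c + 1)):
            m[i, c - i] = 1
        rows.append(m.ravel())
    for c in range(1 - N, N):              # anti-diagonals i - j = c
        m = np.zeros((N, N))
        for i in range(max(0, c), min(N, N + c)):
            m[i, i - c] = 1
        rows.append(m.ravel())
    return np.array(rows)

A = line_rows()                            # 70 line integrals vs. 144 pixels
x_true = np.zeros(N * N)
for (i, j) in [(1, 2), (4, 9), (8, 3), (10, 10)]:   # well-separated spikes
    x_true[i * N + j] = 1.0
b = A @ x_true                             # noiseless line measurements

# ISTA for min 0.5 ||A x - b||^2 + lam ||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
x = np.zeros(N * N)
for _ in range(1500):
    x = x - step * (A.T @ (A @ x - b))                     # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0) # soft-threshold
residual = np.linalg.norm(A @ x - b)
```

The separation requirement in the abstract shows up directly here: the line measurements are mutually coherent, so closely spaced features can be confused, which is what motivates the paper's reweighting strategy.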
Dictionary Learning in Fourier Transform Scanning Tunneling Spectroscopy
Modern high-resolution microscopes, such as the scanning tunneling microscope, are commonly used to study specimens that have dense and aperiodic spatial structure. Extracting meaningful information from images obtained from such microscopes remains a formidable challenge. Fourier analysis is commonly used to analyze the underlying structure of fundamental motifs present in an image. However, the Fourier transform fundamentally suffers from severe phase noise when applied to aperiodic images. Here, we report the development of a new algorithm based on nonconvex optimization, applicable to any microscopy modality, that directly uncovers the fundamental motifs present in a real-space image. Apart from being quantitatively superior to traditional Fourier analysis, we show that this novel algorithm also uncovers phase sensitive information about the underlying motif structure. We demonstrate its usefulness by studying scanning tunneling microscopy images of a Co-doped iron arsenide superconductor and prove that the application of the algorithm allows for the complete recovery of quasiparticle interference in this material. Our phase sensitive quasiparticle interference imaging results indicate that the pairing symmetry in optimally doped NaFeAs is consistent with a sign-changing s+- order parameter.
Frontiers of Engineering
U.S. Frontiers of Engineering (USFOE) symposia bring together 100 outstanding engineers (ages 30 to 45) to exchange information about leading-edge technologies in a range of engineering fields. The 2007 symposium covered engineering trustworthy computer systems, control of protein conformations, biotechnology for fuels and chemicals, modulating and simulating human behavior, and safe water technologies. Papers in this volume describe leading-edge research on disparate tools in software security, decoding the "mechanome," corn-based materials, modeling human cultural behavior, water treatment by UV irradiation, and many other topics. A speech by dinner speaker Dr. Henrique (Rico) Malvar, managing director of Microsoft Research, is also included. Appendixes provide information about contributors, the symposium program, summaries of break-out sessions, and a list of participants. This is the thirteenth volume in the USFOE series.
ARM System Developer's Guide
Over the last ten years, the ARM architecture has become one of the most pervasive architectures in the world, with more than 2 billion ARM-based processors embedded in products ranging from cell phones to automotive braking systems. A world-wide community of ARM developers in semiconductor and product design companies includes software developers, system designers and hardware engineers. To date no book has directly addressed their need to develop the system and software for an ARM-based system. This text fills that gap. This book provides a comprehensive description of the operation of the ARM core from a developer’s perspective with a clear emphasis on software. It demonstrates not only how to write efficient ARM software in C and assembly but also how to optimize code. Example code throughout the book can be integrated into commercial products or used as templates to enable quick creation of productive software. The book covers both the ARM and Thumb instruction sets, covers Intel's XScale Processors, outlines distinctions among the versions of the ARM architecture, demonstrates how to implement DSP algorithms, explains exception and interrupt handling, describes the cache technologies that surround the ARM cores as well as the most efficient memory management techniques. A final chapter looks forward to the future of the ARM architecture considering ARMv6, the latest change to the instruction set, which has been designed to improve the DSP and media processing capabilities of the architecture.
• No other book describes the ARM core from a system and software perspective.
• Author team combines extensive ARM software engineering experience with an in-depth knowledge of ARM developer needs.
• Practical, executable code is fully explained in the book and available on the publisher's Website.
• Includes a simple embedded operating system.