Catalogue Search | MBRL
Explore the vast range of titles available.
541 result(s) for "RESEARCH SPOTLIGHTS"
Julia: A Fresh Approach to Numerical Computing
2017
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be "laws of nature" by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.
Journal Article
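The dispatch mechanism described in the abstract above can be hinted at in Python: `functools.singledispatch` selects an implementation from the runtime type of the first argument. This is only a single-argument analogue (Julia dispatches on the types of all arguments simultaneously); the function and type choices below are illustrative, not from the paper.

```python
from functools import singledispatch

# Single dispatch: the implementation is chosen from the runtime type
# of the first argument. Julia generalizes this to all arguments.

@singledispatch
def describe(x):
    return "some object"          # fallback for unregistered types

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: list):
    return "a list"

print(describe(3))       # selects the int implementation
print(describe([1, 2]))  # selects the list implementation
```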
JuMP: A Modeling Language for Mathematical Optimization
2017
JuMP is an open-source modeling language that allows users to express a wide range of optimization problems (linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear) in a high-level, algebraic syntax. JuMP takes advantage of advanced features of the Julia programming language to offer unique functionality while achieving performance on par with commercial modeling tools for standard tasks. In this work we will provide benchmarks, present the novel aspects of the implementation, and discuss how JuMP can be extended to new problem classes and composed with state-of-the-art tools for visualization and interactivity.
Journal Article
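A core technique behind algebraic modeling languages like the one above is operator overloading: variables combine with `+` and `*` into expression objects that a solver backend can consume. The toy classes below are our own illustration of that idea, not JuMP's actual implementation.

```python
# Toy algebraic modeling via operator overloading: 3*x + 2*y reads like
# algebra but builds a coefficient dictionary a solver could consume.
# All class and method names here are illustrative.

class Var:
    def __init__(self, name):
        self.name = name
    def __rmul__(self, coef):
        return LinExpr({self.name: coef})
    def __add__(self, other):
        return LinExpr({self.name: 1.0}) + other

class LinExpr:
    def __init__(self, terms):
        self.terms = dict(terms)          # variable name -> coefficient
    def __add__(self, other):
        if isinstance(other, Var):
            other = LinExpr({other.name: 1.0})
        merged = dict(self.terms)
        for k, v in other.terms.items():
            merged[k] = merged.get(k, 0.0) + v
        return LinExpr(merged)
    def value(self, assignment):
        return sum(c * assignment[k] for k, c in self.terms.items())

x, y = Var("x"), Var("y")
obj = 3 * x + 2 * y                       # builds {'x': 3, 'y': 2}
print(obj.value({"x": 1.0, "y": 2.0}))    # evaluates 3*1 + 2*2
```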
A New Class of Efficient and Robust Energy Stable Schemes for Gradient Flows
2019
We propose a new numerical technique to deal with nonlinear terms in gradient flows. By introducing a scalar auxiliary variable (SAV), we construct efficient and robust energy stable schemes for a large class of gradient flows. The SAV approach is not restricted to specific forms of the nonlinear part of the free energy and only requires solving decoupled linear equations with constant coefficients. We use this technique to deal with several challenging applications which cannot be easily handled by existing approaches, and we present convincing numerical results to show that our schemes are not only much more efficient and easy to implement, but can also better capture the physical properties in these models. Based on this SAV approach, we can construct unconditionally second-order energy stable schemes, and we can easily construct even third- or fourth-order BDF schemes which, although not unconditionally stable, are very robust in practice. In particular, when coupled with an adaptive time stepping strategy, the SAV approach can be extremely efficient and accurate.
Journal Article
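The SAV construction above can be sketched on the simplest possible gradient flow: a scalar ODE u' = -F'(u) with the double-well energy F(u) = (u² - 1)²/4. Introducing r = √(F(u) + C) makes each time step a linear solve, and the modified energy r² decays monotonically. Step size, shift constant, and initial data below are our illustrative choices.

```python
import numpy as np

# First-order SAV scheme for the scalar gradient flow u' = -F'(u).
# Writing b = F'(u)/sqrt(F(u)+C) and eliminating r^{n+1}, the update
# for du is linear, and r^2 (the modified energy) never increases.

F  = lambda u: 0.25 * (u**2 - 1.0)**2
dF = lambda u: u**3 - u
C  = 1.0                                  # shift keeping F + C positive

dt, u = 0.1, 0.5
r = np.sqrt(F(u) + C)
energies = []
for _ in range(200):
    b = dF(u) / np.sqrt(F(u) + C)
    du = -dt * r * b / (1.0 + 0.5 * dt * b * b)   # linear solve for du
    r += 0.5 * b * du                              # SAV update for r
    u += du
    energies.append(r * r)

# u is driven into the well at u = 1; energies is non-increasing
```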
Bayesian Probabilistic Numerical Methods
2019
Over forty years ago average-case error was proposed in the applied mathematics literature as an alternative criterion with which to assess numerical methods. In contrast to worst-case error, this criterion relies on the construction of a probability measure over candidate numerical tasks, and numerical methods are assessed based on their average performance over those tasks with respect to the measure. This paper goes further and establishes Bayesian probabilistic numerical methods as solutions to certain inverse problems based upon the numerical task within the Bayesian framework. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well defined, encompassing both the nonlinear and non-Gaussian contexts. For general computation, a numerical approximation scheme is proposed and its asymptotic convergence established. The theoretical development is extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, and a challenging industrial application is presented.
Journal Article
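A small worked instance of a probabilistic numerical method in the spirit of the abstract above: Bayesian quadrature for ∫₀¹ f(t) dt under a Brownian-motion prior on f − f(0), covariance k(s, t) = min(s, t). The posterior mean of the integral then coincides with the trapezoidal rule, showing a classical method reappearing as a Bayesian point estimate. Nodes and integrand are our illustrative choices, not from the paper.

```python
import numpy as np

# Bayesian quadrature under a Brownian-motion prior. The posterior mean
# path is the piecewise-linear interpolant of the data, so the posterior
# mean of the integral equals the trapezoidal rule.

f = np.exp
t = np.array([0.25, 0.5, 0.75, 1.0])   # nodes; t = 0 handled via f(0)
g = f(t) - f(0.0)                       # observations of the Brownian part

K  = np.minimum.outer(t, t)             # Gram matrix k(t_i, t_j) = min(t_i, t_j)
kz = t - 0.5 * t**2                     # cov(integral, B(t_i)) = t_i - t_i^2/2

posterior_mean = f(0.0) + kz @ np.linalg.solve(K, g)

# classical trapezoidal rule on the same nodes, for comparison
nodes = np.concatenate(([0.0], t))
vals = f(nodes)
trapezoid = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(nodes))
```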
Multigrid with Rough Coefficients and Multiresolution Operator Decomposition from Hierarchical Information Games
2017
We introduce a near-linear complexity (geometric and meshless/algebraic) multigrid/multiresolution method for PDEs with rough (L∞) coefficients with rigorous a priori accuracy and performance estimates. The method is discovered through a decision/game theory formulation of the problems of (1) identifying restriction and interpolation operators, (2) recovering a signal from incomplete measurements based on norm constraints on its image under a linear operator, and (3) gambling on the value of the solution of the PDE based on a hierarchy of nested measurements of its solution or source term. The resulting elementary gambles form a hierarchy of (deterministic) basis functions of $H_0^1(\Omega)$ (gamblets) that (1) are orthogonal across subscales/subbands with respect to the scalar product induced by the energy norm of the PDE, (2) enable sparse compression of the solution space in $H_0^1(\Omega)$ and (3) induce an orthogonal multiresolution operator decomposition. The operating diagram of the multigrid method is that of an inverted pyramid in which gamblets are computed locally (by virtue of their exponential decay) and hierarchically (from fine to coarse scales) and the PDE is decomposed into a hierarchy of independent linear systems with uniformly bounded condition numbers. The resulting algorithm is parallelizable both in space (via localization) and in bandwidth/subscale (subscales can be computed independently from each other). Although the method is deterministic, it has a natural Bayesian interpretation under the measure of probability emerging (as a mixed strategy) from the information game formulation, and multiresolution approximations form a martingale with respect to the filtration induced by the hierarchy of nested measurements.
Journal Article
Configuring Random Graph Models with Fixed Degree Sequences
2018
Random graph null models have found widespread application in diverse research communities analyzing network datasets, including social, information, and economic networks, as well as food webs, protein-protein interactions, and neuronal networks. The most popular random graph null models, called configuration models, are defined as uniform distributions over a space of graphs with a fixed degree sequence. Commonly, properties of an empirical network are compared to properties of an ensemble of graphs from a configuration model in order to quantify whether empirical network properties are meaningful or whether they are instead a common consequence of the particular degree sequence. In this work we study the subtle but important decisions underlying the specification of a configuration model, and we investigate the role these choices play in graph sampling procedures and a suite of applications. We place particular emphasis on the importance of specifying the appropriate graph labeling—stub-labeled or vertex-labeled—under which to consider a null model, a choice that closely connects the study of random graphs to the study of random contingency tables. We show that the choice of graph labeling is inconsequential for studies of simple graphs, but can have a significant impact on analyses of multigraphs or graphs with self-loops. The importance of these choices is demonstrated through a series of three in-depth vignettes, analyzing three different network datasets under many different configuration models and observing substantial differences in study conclusions under different models. We argue that in each case, only one of the possible configuration models is appropriate. While our work focuses on undirected static networks, it aims to guide the study of directed networks, dynamic networks, and all other network contexts that are suitably studied through the lens of random graph null models.
Journal Article
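The stub-labeled configuration model discussed above has a very short sampler: give each vertex as many "stubs" as its degree, shuffle the stub list, and pair stubs in order. Self-loops and multi-edges can occur, which is exactly where the paper's stub- versus vertex-labeling distinction matters. The function below is a minimal sketch, not code from the paper.

```python
import random

# Stub-matching sampler for the stub-labeled configuration model.
# The returned edge list may contain self-loops and repeated edges.

def configuration_model(degrees, rng=random):
    if sum(degrees) % 2 != 0:
        raise ValueError("degree sequence must have even sum")
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)                   # uniform over stub matchings
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

edges = configuration_model([3, 2, 2, 1])   # degrees are preserved exactly
```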
Frames and Numerical Approximation
2019
Functions of one or more variables are usually approximated with a basis: a complete, linearly independent system of functions that spans a suitable function space. The topic of this paper is the numerical approximation of functions using the more general notion of frames: that is, complete systems that are generally redundant but provide infinite representations with bounded coefficients. While frames are well known in image and signal processing, coding theory, and other areas of applied mathematics, their use in numerical analysis is far less widespread. Yet, as we show via a series of examples, frames are more flexible than bases and can be constructed easily in a range of problems where finding orthonormal bases with desirable properties (rapid convergence, high-resolution power, etc.) is difficult or impossible. For instance, we exhibit a frame which yields simple, high-order approximations of smooth, multivariate functions in arbitrary geometries. A key concern when using frames is that computing a best approximation requires solving an ill-conditioned linear system. Nonetheless, we construct a frame approximation via regularization with bounded condition number (with respect to perturbations in the data), which approximates any function up to an error of order √ϵ, or even of order ϵ with suitable modifications. Here, ϵ is a threshold value that can be chosen by the user. Crucially, the rate of decay of the error down to this level is determined by the existence of approximate representations of f in the frame possessing small-norm coefficients. We demonstrate the existence of such representations in all of our examples. Overall, our analysis suggests that frames are a natural generalization of bases in which to develop numerical approximations. In particular, even in the presence of severely ill-conditioned linear systems, the frame condition imposes sufficient mathematical structure in order to give rise to accurate, well-conditioned approximations.
Journal Article
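The regularized frame approximation described above can be sketched with a classic example: approximate f on [-1/2, 1/2] using Fourier modes that are periodic on the larger interval [-1, 1] (a "Fourier extension" frame). The least-squares matrix is severely ill-conditioned, so small singular values are truncated at a threshold ϵ, as in the abstract. Sizes, integrand, and tolerances below are our illustrative choices.

```python
import numpy as np

# Fourier-extension frame approximation via truncated-SVD regularization.
# The frame is redundant on [-1/2, 1/2], so A is ill-conditioned, but
# discarding singular values below eps yields a stable, accurate fit.

f = lambda x: np.exp(x) * np.sin(3 * x)
n = 20                                     # modes k = -n..n
x = np.linspace(-0.5, 0.5, 200)            # oversampled grid
k = np.arange(-n, n + 1)
A = np.exp(1j * np.pi * np.outer(x, k))    # frame synthesis matrix

eps = 1e-12
U, s, Vh = np.linalg.svd(A, full_matrices=False)
keep = s > eps * s[0]                      # truncate small singular values
c = (Vh[keep].conj().T / s[keep]) @ (U[:, keep].conj().T @ f(x))

err = np.max(np.abs(A @ c - f(x)))         # residual on the sample grid
```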
The Spacey Random Walk: A Stochastic Process for Higher-Order Data
2017
Random walks are a fundamental model in applied mathematics and are a common example of a Markov chain. The limiting stationary distribution of the Markov chain represents the fraction of the time spent in each state during the stochastic process. A standard way to compute this distribution for a random walk on a finite set of states is to compute the Perron vector of the associated transition matrix. There are algebraic analogues of this Perron vector in terms of transition probability tensors of higher-order Markov chains. These vectors are nonnegative, have dimension equal to the dimension of the state space, and sum to one, and they are derived by making an algebraic substitution in the equation for the joint-stationary distribution of a higher-order Markov chain. Here, we present the spacey random walk, a non-Markovian stochastic process whose stationary distribution is given by the tensor eigenvector. The process itself is a vertex-reinforced random walk, and its discrete dynamics are related to a continuous dynamical system. We analyze the convergence properties of these dynamics and discuss numerical methods for computing the stationary distribution. Finally, we provide several applications of the spacey random walk model in population genetics, ranking, and clustering data, and we use the process to analyze New York taxi trajectory data. This example shows definite non-Markovian structure.
Journal Article
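The tensor eigenvector mentioned above, x_i = Σ_jk P[i,j,k] x_j x_k, can be computed for a small example by fixed-point iteration. The 2-state transition tensor below is our illustrative choice (its columns are stochastic over the first index), not data from the paper; for this P the iteration is a contraction and converges.

```python
import numpy as np

# Fixed-point iteration for the tensor eigenvector of a 2-state
# transition probability tensor: x_i <- sum_jk P[i,j,k] x_j x_k.

P = np.zeros((2, 2, 2))
P[0] = [[0.9, 0.6], [0.5, 0.1]]   # P[0, j, k] = prob. of state 0 given (j, k)
P[1] = 1.0 - P[0]                 # entries sum to one over the first index

x = np.array([0.5, 0.5])
for _ in range(200):
    x = np.einsum('ijk,j,k->i', P, x, x)
    x /= x.sum()                  # keep x a probability vector

residual = np.max(np.abs(np.einsum('ijk,j,k->i', P, x, x) - x))
```

For this tensor the stationary probability of state 0 solves p² + p − 1 = 0, i.e. p = (√5 − 1)/2.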
Splines Are Universal Solutions of Linear Inverse Problems with Generalized TV Regularization
2017
Splines come in a variety of flavors that can be characterized in terms of some differential operator L. The simplest piecewise-constant model corresponds to the derivative operator. Likewise, one can extend the traditional notion of total variation by considering more general operators than the derivative. This results in the definitions of a generalized total variation seminorm and its corresponding native space, which is further identified as the direct sum of two Banach spaces. We then prove that the minimization of the generalized total variation (gTV), subject to some arbitrary (convex) consistency constraints on the linear measurements of the signal, admits nonuniform L-spline solutions with fewer knots than the number of measurements. This shows that nonuniform splines are universal solutions of continuous-domain linear inverse problems with LASSO, L₁, or total-variation-like regularization constraints. Remarkably, the type of spline is fully determined by the choice of L and does not depend on the actual nature of the measurements.
Journal Article
Opinion Dynamics and the Evolution of Social Power in Influence Networks
2015
This paper studies the evolution of self-appraisal, social power, and interpersonal influences for a group of individuals who discuss and form opinions about a sequence of issues. Our empirical model combines the averaging rule of DeGroot to describe opinion formation processes and the reflected appraisal mechanism of Friedkin to describe the dynamics of individuals' self-appraisal and social power. Given a set of relative interpersonal weights, the DeGroot–Friedkin model predicts the evolution of the influence network governing the opinion formation process. We provide a rigorous mathematical formulation of the influence network dynamics, characterize its equilibria, and establish its convergence properties for all possible structures of the relative interpersonal weights and corresponding eigenvector centrality scores. The model predicts that the social power ranking among individuals is asymptotically equal to their centrality ranking, that social power tends to accumulate at the top of the hierarchy, and that an autocratic (resp., democratic) power structure arises when the centrality scores are maximally nonuniform (resp., uniform).
Journal Article
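The dynamics described above can be sketched by iterating the DeGroot–Friedkin self-appraisal map: given the eigenvector centrality scores c of the relative interaction matrix, social power updates as x ← normalize(c_i / (1 − x_i)), and its fixed point ranks individuals exactly as c does, matching the paper's prediction. The interaction matrix W below is our illustrative example.

```python
import numpy as np

# DeGroot-Friedkin map: social power x evolves toward an equilibrium
# whose ranking matches the eigenvector centrality ranking of W.

W = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])           # row-stochastic, zero diagonal

# dominant left eigenvector of W (eigenvector centrality), via power iteration
c = np.ones(3) / 3
for _ in range(500):
    c = c @ W
    c /= c.sum()

# iterate the self-appraisal map from uniform social power
x = np.ones(3) / 3
for _ in range(500):
    x = c / (1.0 - x)
    x /= x.sum()
```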