Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
64,970 result(s) for "Mathematical methods in physics"
Self-similarity of complex networks
by Makse, Hernán A.; Havlin, Shlomo; Song, Chaoming
in Exact sciences and technology; Fractals; Humanities and Social Sciences
2005
Complex matters
‘Scale-free’ networks, such as linked web pages, people in social groups, or cellular interaction networks show uneven connectivity distributions: there is no typical number of links per node. Many of these networks also exhibit the ‘small-world’ effect, called ‘six degrees of separation’ when applied to sociology. A new analysis of such networks, in which nodes are partitioned into boxes of different sizes, reveals that they share the surprising feature of self-similarity. In other words, these networks are constructed of fractal-like self-repeating patterns or degrees of separation. This may help explain how the scale-free property of such networks arises.
Complex networks have been studied extensively owing to their relevance to many real systems such as the world-wide web, the Internet, energy landscapes and biological and social networks [1, 2, 3, 4, 5]. A large number of real networks are referred to as ‘scale-free’ because they show a power-law distribution of the number of links per node [1, 6, 7]. However, it is widely believed that complex networks are not invariant or self-similar under a length-scale transformation. This conclusion originates from the ‘small-world’ property of these networks, which implies that the number of nodes increases exponentially with the ‘diameter’ of the network [8, 9, 10, 11], rather than the power-law relation expected for a self-similar structure. Here we analyse a variety of real complex networks and find that, on the contrary, they consist of self-repeating patterns on all length scales. This result is achieved by the application of a renormalization procedure that coarse-grains the system into boxes containing nodes within a given ‘size’. We identify a power-law relation between the number of boxes needed to cover the network and the size of the box, defining a finite self-similar exponent. These fundamental properties help to explain the scale-free nature of complex networks and suggest a common self-organization dynamics.
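A minimal sketch of the box-covering idea described above, assuming a simple greedy covering heuristic and a synthetic scale-free graph (the paper analyses real networks such as the world-wide web); the function name `box_covering` and the use of NetworkX are illustrative choices, not the authors' code. The slope of log N_B against log l_B estimates the self-similar exponent d_B.

```python
# Illustrative sketch (not the authors' code): greedy box-covering of a network,
# estimating the fractal exponent d_B from N_B(l_B) ~ l_B^(-d_B).
import networkx as nx
import numpy as np

def box_covering(G, l_B):
    """Greedily partition nodes into boxes whose internal distances are all < l_B."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    unassigned = set(G.nodes())
    boxes = []
    while unassigned:
        seed = next(iter(unassigned))
        box = {seed}
        for v in list(unassigned - {seed}):
            # add v only if it stays within distance < l_B of every node already in the box
            if all(dist[v].get(u, np.inf) < l_B for u in box):
                box.add(v)
        boxes.append(box)
        unassigned -= box
    return boxes

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(500, 2, seed=0)   # a stand-in scale-free network
    sizes = [2, 3, 4, 5, 6]
    counts = [len(box_covering(G, l)) for l in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    print("l_B:", sizes)
    print("N_B:", counts)
    print("estimated d_B ≈", -slope)
```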
Journal Article
Statistical Approach to Quantum Field Theory
by Wipf, Andreas
in Complex Systems; Elementary Particles, Quantum Field Theory; Field theory (Physics)
2013
This book opens with a self-contained introduction to path integrals in Euclidean quantum mechanics and statistical mechanics, and moves on to cover lattice field theory, spin systems, gauge theories and more. Each chapter ends with illustrative problems.
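As a rough illustration of the kind of calculation the opening chapters cover, here is a minimal Euclidean path-integral Monte Carlo for a one-dimensional harmonic oscillator on a periodic time lattice; the Metropolis update and all parameter values are a simple sketch, not code from the book.

```python
# Minimal sketch (not from the book): Metropolis sampling of the Euclidean path
# integral for a 1-D harmonic oscillator on a periodic time lattice.
import numpy as np

rng = np.random.default_rng(0)
N, a, m, omega = 100, 0.5, 1.0, 1.0     # lattice sites, spacing, mass, frequency
x = np.zeros(N)                          # initial path

def action_change(x, i, new):
    """Change in the discretized Euclidean action when site i is set to `new`."""
    ip, im = (i + 1) % N, (i - 1) % N
    def S(xi):
        kinetic = m * ((x[ip] - xi) ** 2 + (xi - x[im]) ** 2) / (2 * a)
        potential = a * m * omega**2 * xi**2 / 2
        return kinetic + potential
    return S(new) - S(x[i])

samples = []
for sweep in range(5000):
    for i in range(N):
        new = x[i] + rng.uniform(-1.0, 1.0)
        if rng.random() < np.exp(-action_change(x, i, new)):
            x[i] = new
    if sweep > 500:                      # discard thermalization sweeps
        samples.append(np.mean(x**2))

# <x^2> for the lattice oscillator (close to 1/(2*m*omega) up to discretization effects)
print("<x^2> ≈", np.mean(samples))
```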
Generalized Voice-Leading Spaces
by Quinn, Ian; Tymoczko, Dmitri; Callender, Clifton
in Equivalence relation; Exact sciences and technology; Geometry
2008
Western musicians traditionally classify pitch sequences by disregarding the effects of five musical transformations: octave shift, permutation, transposition, inversion, and cardinality change. We model this process mathematically, showing that it produces 32 equivalence relations on chords, 243 equivalence relations on chord sequences, and 32 families of geometrical quotient spaces, in which both chords and chord sequences are represented. This model reveals connections between music-theoretical concepts, yields new analytical tools, unifies existing geometrical representations, and suggests a way to understand similarity between chord types.
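A toy sketch of three of the five transformations listed in the abstract (octave shift, permutation, transposition), assuming pitches are represented as real numbers in semitones; the `normalize` helper is hypothetical and only illustrates how chords related by these operations collapse to a single representative.

```python
# Toy illustration (not the authors' model): reducing a chord by three of the
# equivalences discussed in the paper: octave (O), permutation (P), and
# transposition (T). Pitches are real numbers measured in semitones.
from typing import Tuple

def normalize(chord: Tuple[float, ...],
              octave=True, permutation=True, transposition=True) -> Tuple[float, ...]:
    pitches = list(chord)
    if octave:          # identify pitches an octave (12 semitones) apart
        pitches = [p % 12 for p in pitches]
    if permutation:     # ignore the order in which notes are listed
        pitches = sorted(pitches)
    if transposition:   # shift so the chord is centered at zero, removing overall transposition
        shift = sum(pitches) / len(pitches)
        pitches = [p - shift for p in pitches]
    return tuple(round(p, 3) for p in pitches)

# C major (C4, E4, G4) and D major (D5, F#5, A5) land in the same equivalence class:
c_major = (60.0, 64.0, 67.0)
d_major = (74.0, 78.0, 81.0)
print(normalize(c_major) == normalize(d_major))   # True
```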
Journal Article
Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
by Esfahani, Peyman Mohajerin; Kuhn, Daniel
in Economic models; Global optimization; Monte Carlo simulation
2018
We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs—in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification.
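A minimal one-dimensional sketch of the reformulation idea, under the extra assumptions that the loss is a maximum of affine functions and the ambiguity set is a type-1 Wasserstein ball over the real line; in that special case the dual reformulation collapses to a closed form (empirical mean plus radius times the largest slope magnitude). This is only an illustration, not the paper's general method.

```python
# Minimal 1-D sketch (not the paper's general method): for a piecewise affine loss
# l(xi) = max_k (a_k*xi + b_k) and a type-1 Wasserstein ball of radius eps around the
# empirical distribution on R, the dual reformulation
#     sup_Q E_Q[l] = min_{lam >= 0} lam*eps + (1/N) * sum_i sup_xi [l(xi) - lam*|xi - xi_i|]
# collapses to: empirical mean of l + eps * max_k |a_k|.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(loc=1.0, scale=2.0, size=200)   # training data xi_1..xi_N (illustrative)
a = np.array([-1.0, 0.5, 2.0])                        # slopes of the affine pieces
b = np.array([0.0, 0.3, -1.0])                        # intercepts of the affine pieces
eps = 0.1                                             # Wasserstein radius

def loss(xi):
    return np.max(a * xi + b)

empirical = np.mean([loss(xi) for xi in samples])     # sample-average (SAA) value
worst_case = empirical + eps * np.max(np.abs(a))      # Wasserstein-DRO value

print(f"SAA objective:             {empirical:.4f}")
print(f"DRO objective (eps={eps}):   {worst_case:.4f}")
```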
Journal Article
Minimizing finite sums with the stochastic average gradient
by Le Roux, Nicolas; Bach, Francis; Schmidt, Mark
in Algorithms; Calculus of Variations and Optimal Control; Optimization; Combinatorics
2017
We analyze the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method’s iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/√k) to O(1/k) in general, and when the sum is strongly convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(ρ^k) for ρ < 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. This extends our earlier work (Le Roux et al., Adv Neural Inf Process Syst, 2012), which only led to a faster rate for well-conditioned strongly convex problems. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
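A short sketch of the SAG update on a toy least-squares finite sum, assuming a constant step size of 1/(16L); the problem instance and variable names are illustrative, not the authors' reference implementation.

```python
# Illustrative sketch of the SAG update (not the authors' code): minimize
# f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2 by sampling one index per iteration,
# refreshing its stored gradient, and stepping along the average of all stored gradients.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)

L = np.max(np.sum(A**2, axis=1))      # Lipschitz constant of the per-term gradients
step = 1.0 / (16 * L)                 # conservative constant step size

x = np.zeros(d)
grad_table = np.zeros((n, d))         # memory of the most recent gradient of each term
grad_sum = np.zeros(d)

for it in range(50 * n):
    i = rng.integers(n)
    g_new = (A[i] @ x - b[i]) * A[i]  # gradient of the i-th term at the current x
    grad_sum += g_new - grad_table[i] # keep the running sum of stored gradients current
    grad_table[i] = g_new
    x -= step * grad_sum / n          # SAG step: average of the stored gradients

f = 0.5 * np.mean((A @ x - b) ** 2)
f_star = 0.5 * np.mean((A @ np.linalg.lstsq(A, b, rcond=None)[0] - b) ** 2)
print(f"objective gap after {50*n} iterations: {f - f_star:.2e}")
```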
Journal Article
Ladder symmetries and Love numbers of Reissner-Nordström black holes
by Rai, Mudit; Santoni, Luca
in Black Holes; Classical and Quantum Gravitation; Classical Theories of Gravity
2024
It is well known that asymptotically flat black holes in general relativity have vanishing tidal Love numbers. In the case of Schwarzschild and Kerr black holes, this property has been shown to be a consequence of a hidden structure of ladder symmetries for the perturbations. In this work, we extend the ladder symmetries to non-rotating charged black holes in general relativity. As opposed to previous works in this context, we adopt a more general definition of Love numbers, including quadratic operators that mix gravitational and electromagnetic perturbations in the point-particle effective field theory. We show that the calculation of a subset of those couplings in full general relativity is affected by an ambiguity in the split between source and response, which we resolve through an analytic continuation. As a result, we derive a novel master equation that unifies scalar, electromagnetic and gravitational perturbations around Reissner-Nordström black holes. The equation is hypergeometric and can be obtained from previous formulations via nontrivial field redefinitions, which allow one to systematically remove some of the singularities and make the presence of the ladder symmetries more manifest.
Journal Article
Self-Organized Origami
2005
In origami, form follows the sequential spatial organization of folds. This requires continuous intervention and raises a natural question: Can origami arise through self-organization? We answer this affirmatively by examining the possible physical origin for the Miura-ori leaf-folding patterns that arise naturally in insect wings, leaves, and other laminae-like organelles. In particular, we point out examples where biaxial compression of an elastically supported thin film, such as that due to differential growth, shrinkage, desiccation, or thermal expansion, spontaneously generates these patterns, and we provide a simple theoretical explanation for their occurrence.
Journal Article
Lower bounds for finding stationary points I
2020
We prove lower bounds on the complexity of finding ϵ-stationary points (points x such that ‖∇f(x)‖ ≤ ϵ) of smooth, high-dimensional, and potentially non-convex functions f. We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of f at a query point x. We show that for any (potentially randomized) algorithm A, there exists a function f with Lipschitz pth order derivatives such that A requires at least ϵ^{-(p+1)/p} queries to find an ϵ-stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton’s method, and generalized pth order regularization are worst-case optimal within their natural function classes.
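A quick numeric reading of the stated bound, assuming p denotes the order of the highest Lipschitz derivative: for p = 1 the bound is ϵ^{-2}, which matches the worst-case query count of gradient descent.

```python
# Quick numeric reading of the stated lower bound: at least eps**(-(p+1)/p) queries
# are needed to find an eps-stationary point when the p-th derivatives are Lipschitz.
for p in (1, 2, 3):
    for eps in (1e-2, 1e-3):
        queries = eps ** (-(p + 1) / p)
        print(f"p={p}, eps={eps:g}: ~{queries:,.0f} queries")
```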
Journal Article
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
2021
On solving a convex-concave bilinear saddle-point problem (SPP), there have been many works studying the complexity results of first-order methods. These results are all about upper complexity bounds, which can determine at most how many iterations would guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds of first-order methods on large-scale SPPs. Our results apply to the methods whose iterates are in the linear span of past first-order information, as well as more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on the affinely constrained smooth convex optimization that is a special case of SPP. Different from gradient methods on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t²), and in addition, O(1/t) is optimal for convex problems. Moreover, we prove that for strongly convex problems, O(1/t²) is the best possible convergence rate, while it is known that gradient methods can have linear convergence on unconstrained problems. Then we extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature, and thus they are tight and indicate the optimality of several existing first-order methods.
Journal Article
Linear convergence of first order methods for non-strongly convex optimization
by Necoara, I; Nesterov, Yu; Glineur, F
in Continuity (mathematics); Convergence; Convex analysis
2019
The standard assumption for proving linear convergence of first order methods for smooth convex optimization is the strong convexity of the objective function, an assumption which does not hold for many practical applications. In this paper, we derive linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e., problems involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition. In particular, in the case of smooth constrained convex optimization, we provide several relaxations of the strong convexity conditions and prove that they are sufficient for obtaining linear convergence for several first order methods such as projected gradient, fast gradient and feasible descent methods. We also provide examples of functional classes that satisfy our proposed relaxations of the strong convexity conditions. Finally, we show that the proposed relaxed strong convexity conditions cover important applications ranging from solving linear systems and linear programming to dual formulations of linearly constrained convex problems.
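A small numeric illustration of this phenomenon, assuming an unconstrained rank-deficient least-squares objective (convex and smooth but not strongly convex, while satisfying a quadratic-growth-type condition over its solution set); the paper's setting is broader (constrained problems and several first order methods), so this is only a sketch.

```python
# Small illustration (not the paper's code): gradient descent on a rank-deficient
# least-squares problem. f(x) = 0.5*||A x - b||^2 is convex and smooth but NOT strongly
# convex (A has a nontrivial null space), yet the objective gap still decays geometrically,
# in line with relaxed strong-convexity / quadratic-growth conditions.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 80, 100, 40                              # rank r < n, so no strong convexity
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
b = A @ rng.normal(size=n)                         # b lies in range(A), so the optimal value is 0

L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
x = np.zeros(n)
for k in range(1, 501):
    x -= (1.0 / L) * (A.T @ (A @ x - b))           # plain gradient step
    if k % 100 == 0:
        gap = 0.5 * np.linalg.norm(A @ x - b) ** 2
        print(f"iter {k:4d}: objective gap = {gap:.3e}")
```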
Journal Article