145,507 results for "Functions (mathematics)"
Symmetric Markov processes, time change, and boundary theory
This book gives a comprehensive and self-contained introduction to the theory of symmetric Markov processes and symmetric quasi-regular Dirichlet forms. In a detailed and accessible manner, Zhen-Qing Chen and Masatoshi Fukushima cover the essential elements and applications of the theory of symmetric Markov processes, including recurrence/transience criteria, probabilistic potential theory, additive functional theory, and time change theory. The authors develop the theory in a general framework of symmetric quasi-regular Dirichlet forms in a unified manner with that of regular Dirichlet forms, emphasizing the role of extended Dirichlet spaces and the rich interplay between the probabilistic and analytic aspects of the theory. Chen and Fukushima then address the latest advances in the theory, presented here for the first time in any book. Topics include the characterization of time-changed Markov processes in terms of Douglas integrals and a systematic account of reflected Dirichlet spaces, and the important roles such advances play in the boundary theory of symmetric Markov processes. This volume is an ideal resource for researchers and practitioners, and can also serve as a textbook for advanced graduate students. It includes examples, appendixes, and exercises with solutions.
Mathematical Expressions Useful for Tunable Properties of Simple Square Wave Generators
This paper compares two electronically controllable solutions of triangular and square wave generators benefiting from a single IC package that includes all necessary active elements (modular cells fabricated in the I3T 0.35 µm ON Semiconductor process operating with a ±1.65 V supply voltage). The internal cells are used to construct the building blocks of the generator (integrator and Schmitt trigger/comparator). The proposed solutions have adjustable parameters dependent on the values of DC control voltages and currents. Attention is given to the mathematical expressions governing the advantageous tunability of these generators. Theoretical functions comparing the standard linear formula with a special expression for the frequency adjustment are evaluated and compared with experimentally obtained results. These expressions show that the proposed topologies improve the efficiency of tunability and reduce the overall complexity of both generators. The features of the proposed solutions were verified experimentally. Both single-parameter tunable designs target operation in bands from tens to hundreds of kHz (from 13 kHz up to 251 kHz for a driving voltage between 0.05 V and 1.0 V in the first solution; from 12 kHz up to 847 kHz for a driving current between 5 µA and 140 µA in the second solution). A comparison with similar solutions indicates the beneficial performance of the proposed circuits in tunability ratio versus driving-parameter ratio, as well as in circuit simplicity. A qualitative evaluation and comparison of the parameters of both circuits is given and confirms the theoretical expectations.
Ten Equivalent Definitions of the Fractional Laplace Operator
This article discusses several definitions of the fractional Laplace operator L = −(−Δ)^{α/2} in ℝ^d, also known as the Riesz fractional derivative operator; here α ∈ (0,2) and d ≥ 1. This is a core example of a nonlocal pseudo-differential operator, appearing in various areas of theoretical and applied mathematics. As an operator on the Lebesgue spaces L^p (with p ∈ [1,∞)), on the space C_0 of continuous functions vanishing at infinity, and on the space C_bu of bounded uniformly continuous functions, L can be defined, among others, as a singular integral operator, as the generator of an appropriate semigroup of operators, by Bochner's subordination, or using harmonic extensions. It is relatively easy to see that all these definitions agree on the space of appropriately smooth functions. We collect and extend known results in order to prove that in fact all these definitions are completely equivalent: on each of the above function spaces, the corresponding operators have a common domain and they coincide on that common domain.
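One of the equivalent definitions above is via the Fourier multiplier |ξ|^α. A rough numerical illustration of that definition, on a periodic 1-D grid rather than on ℝ^d (so this is only a sketch, and the function name is illustrative): for α = 2 the operator reduces to the classical Laplacian, which gives an easy consistency check.

```python
import numpy as np

def frac_laplacian_periodic(f_vals, alpha, period=2 * np.pi):
    """Apply L = -(-Δ)^{α/2} to samples of a periodic function via the
    Fourier-multiplier definition: symbol -|k|^α (1-D, given period)."""
    n = len(f_vals)
    # angular wavenumbers k = 0, ±1, ±2, ... for a 2π-periodic grid
    k = np.fft.fftfreq(n, d=period / n) * 2 * np.pi
    fhat = np.fft.fft(f_vals)
    return np.real(np.fft.ifft(-(np.abs(k) ** alpha) * fhat))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# alpha = 2 recovers the classical Laplacian: Δ sin(2x) = -4 sin(2x)
out = frac_laplacian_periodic(np.sin(2 * x), alpha=2.0)
```

For fractional α the same routine applies the symbol −|k|^α, which is the spectral analogue of the singular-integral definition on smooth functions.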
Proximal alternating linearized minimization for nonconvex and nonsmooth problems
We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka–Łojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows us to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward–backward algorithms with semi-algebraic problem data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.
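The closing application (nonnegative matrix factorization) gives a concrete instance of the alternating scheme: each block takes a gradient step on the smooth coupling term, followed by the proximal map of its nonsmooth part, which for the nonnegativity constraint is just a projection. A minimal sketch under those assumptions (plain NMF without the sparsity term; step-size constants are my choice, not the paper's):

```python
import numpy as np

def palm_nmf(A, r, iters=500, seed=0):
    """PALM sketch for min 0.5*||A - X Y||_F^2  s.t.  X >= 0, Y >= 0.
    Each block: gradient step on the smooth term with step 1/c, where c is
    slightly above the block Lipschitz constant, then projection onto the
    nonnegative orthant (the prox of the indicator function)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X, Y = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        Lx = np.linalg.norm(Y @ Y.T, 2) + 1e-12   # Lipschitz const. of grad_X
        X = np.maximum(X - ((X @ Y - A) @ Y.T) / (1.1 * Lx), 0.0)
        Ly = np.linalg.norm(X.T @ X, 2) + 1e-12   # Lipschitz const. of grad_Y
        Y = np.maximum(Y - (X.T @ (X @ Y - A)) / (1.1 * Ly), 0.0)
    return X, Y

rng = np.random.default_rng(1)
A = rng.random((20, 2)) @ rng.random((2, 30))   # exact nonnegative rank-2 data
X, Y = palm_nmf(A, r=2)
err = np.linalg.norm(A - X @ Y)
```

The convergence guarantee in the abstract applies because the objective is semi-algebraic, so bounded PALM sequences converge to a critical point (not necessarily a global minimizer).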
The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved for the case in which two blocks of variables are alternately updated. It is strongly desirable and practically valuable to extend the ADMM directly to the case of a multi-block convex minimization problem, where the objective function is the sum of more than two separable convex functions. However, the convergence of this extension has remained unresolved for a long time: neither an affirmative convergence proof nor an example showing its divergence is known in the literature. In this paper we give a negative answer to this long-standing open question: the direct extension of ADMM is not necessarily convergent. We present a sufficient condition to ensure the convergence of the direct extension of ADMM, and give an example to show its divergence.
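The divergence example in the paper is a tiny linear feasibility problem. The sketch below is my own transcription of that setup (zero objective, constraint A₁x₁ + A₂x₂ + A₃x₃ = 0 with a specific 3×3 matrix, penalty β = 1, so treat the details as assumptions): since each subproblem is a quadratic in one scalar, the whole sweep is a linear map, and one can check numerically whether its spectral radius exceeds 1.

```python
import numpy as np

# Columns of the 3x3 matrix used in the counterexample:
# minimize 0  subject to  A1*x1 + A2*x2 + A3*x3 = 0, each x_i a scalar.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 2.0]])
A1, A2, A3 = A[:, 0], A[:, 1], A[:, 2]
beta = 1.0

def admm_sweep(state):
    """One sweep of the direct 3-block ADMM on state (x2, x3, lam).
    Each x_i minimizes the augmented Lagrangian with the others fixed."""
    x2, x3, lam = state[0], state[1], state[2:]
    x1 = -A1 @ (A2 * x2 + A3 * x3 + lam / beta) / (A1 @ A1)
    x2 = -A2 @ (A1 * x1 + A3 * x3 + lam / beta) / (A2 @ A2)
    x3 = -A3 @ (A1 * x1 + A2 * x2 + lam / beta) / (A3 @ A3)
    lam = lam + beta * (A1 * x1 + A2 * x2 + A3 * x3)
    return np.concatenate(([x2, x3], lam))

# The sweep is linear and homogeneous, so assemble its matrix columnwise.
M = np.column_stack([admm_sweep(e) for e in np.eye(5)])
rho = max(abs(np.linalg.eigvals(M)))
```

A spectral radius above 1 means the iterates grow geometrically from generic starting points, which is the paper's negative answer in numerical form.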
Variational Physics Informed Neural Networks: the Role of Quadratures and Test Functions
In this work we analyze how quadrature rules of different precisions and piecewise polynomial test functions of different degrees affect the convergence rate of Variational Physics Informed Neural Networks (VPINN) with respect to mesh refinement, while solving elliptic boundary-value problems. Using a Petrov-Galerkin framework relying on an inf-sup condition, we derive an a priori error estimate in the energy norm between the exact solution and a suitable high-order piecewise interpolant of a computed neural network. Numerical experiments confirm the theoretical predictions and highlight the importance of the inf-sup condition. Our results suggest, somewhat counterintuitively, that for smooth solutions the best strategy to achieve a high decay rate of the error consists in choosing test functions of the lowest polynomial degree, while using quadrature formulas of suitably high precision.
Matrices, Moments and Quadrature with Applications
This computationally oriented book describes and explains the mathematical relationships among matrices, moments, orthogonal polynomials, quadrature rules, and the Lanczos and conjugate gradient algorithms. The book bridges different mathematical areas to obtain algorithms to estimate bilinear forms involving two vectors and a function of the matrix. The first part of the book provides the necessary mathematical background and explains the theory. The second part describes the applications and gives numerical examples of the algorithms and techniques developed in the first part. Applications addressed in the book include computing elements of functions of matrices; obtaining estimates of the error norm in iterative methods for solving linear systems and computing parameters in least squares and total least squares; and solving ill-posed problems using Tikhonov regularization. This book will interest researchers in numerical linear algebra and matrix computations, as well as scientists and engineers working on problems involving computation of bilinear forms.
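The central technique the book develops can be sketched in a few lines: to estimate the bilinear form u^T f(A) u for symmetric A, run k Lanczos steps from u and apply f to the resulting small tridiagonal matrix T_k; the quantity ||u||² e₁^T f(T_k) e₁ is a Gauss-quadrature approximation of the underlying Riemann–Stieltjes integral. A minimal sketch (function names are illustrative, and no reorthogonalization is done, so this is only adequate for small well-conditioned examples):

```python
import numpy as np

def lanczos_tridiag(A, u, k):
    """k steps of Lanczos started from u: returns the k x k tridiagonal T_k."""
    q_prev, q = np.zeros_like(u), u / np.linalg.norm(u)
    alphas, betas, beta = [], [], 0.0
    for _ in range(k):
        w = A @ q - beta * q_prev
        alpha = q @ w
        w = w - alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, (w / beta if beta > 1e-14 else w)  # guard vs. breakdown
    return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

def bilinear_form_estimate(A, u, f, k):
    """Gauss-quadrature estimate of u^T f(A) u:  ||u||^2 * e1^T f(T_k) e1."""
    T = lanczos_tridiag(A, u, k)
    vals, V = np.linalg.eigh(T)
    return (u @ u) * (V @ np.diag(f(vals)) @ V.T)[0, 0]

rng = np.random.default_rng(0)
n = 12
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite test matrix
u = rng.standard_normal(n)
est = bilinear_form_estimate(A, u, lambda x: 1.0 / x, k=n)
exact = u @ np.linalg.solve(A, u)  # u^T A^{-1} u, computed directly
```

With f(x) = 1/x this estimates u^T A⁻¹ u, the quantity behind the error-norm estimates for iterative linear solvers mentioned in the blurb; in practice k is taken much smaller than n.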
Log-gases and random matrices
Random matrix theory, both as an application and as a theory, has evolved rapidly over the past fifteen years. Log-Gases and Random Matrices gives a comprehensive account of these developments, emphasizing log-gases as a physical picture and heuristic, as well as covering topics such as beta ensembles and Jack polynomials.
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Classical stochastic gradient methods are well suited for minimizing expected-value objective functions. However, they do not apply to the minimization of a nonlinear function of expected values, i.e., a composition of two expected-value functions: the problem min_x 𝔼_v[f_v(𝔼_w[g_w(x)])]. In order to solve this stochastic composition problem, we propose a class of stochastic compositional gradient descent (SCGD) algorithms that can be viewed as stochastic versions of the quasi-gradient method. SCGD updates the solution based on noisy sample gradients of f_v and g_w, and uses an auxiliary variable to track the unknown quantity 𝔼_w[g_w(x)]. We prove that SCGD converges almost surely to an optimal solution for convex optimization problems, as long as such a solution exists. The convergence involves the interplay of two iterations with different time scales. For nonsmooth convex problems, SCGD achieves a convergence rate of O(k^{−1/4}) in the general case and O(k^{−2/3}) in the strongly convex case, after taking k samples. For smooth convex problems, SCGD can be accelerated to converge at a rate of O(k^{−2/7}) in the general case and O(k^{−4/5}) in the strongly convex case. For nonconvex problems, we prove that any limit point generated by SCGD is a stationary point, for which we also provide a convergence rate analysis. The stochastic setting in which one wants to optimize compositions of expected-value functions is indeed very common in practice, and the proposed SCGD methods find wide applications in learning, estimation, dynamic programming, and elsewhere.
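The two-timescale structure described in the abstract can be seen on a toy instance: take f(u) = u² (deterministic) and g_w(x) = x + w with 𝔼[w] = 0, so the objective is (𝔼_w[g_w(x)])² = x² and the optimum is x = 0. The step-size exponents below follow the fast/slow split of the method, but the exact constants are my choice:

```python
import numpy as np

def scgd_toy(x0=2.0, iters=20000, seed=0):
    """SCGD sketch for min_x (E_w[g_w(x)])^2 with g_w(x) = x + w, f(u) = u^2.
    The auxiliary variable y tracks the inner expectation E_w[g_w(x)] = x on
    a faster time scale; x takes a quasi-gradient step grad g * f'(y) = 2y."""
    rng = np.random.default_rng(seed)
    x, y = x0, 0.0
    for k in range(1, iters + 1):
        a_k = 0.5 / k ** 0.75        # slow step size for the decision variable
        b_k = 1.0 / k ** 0.5         # fast step size for the tracking variable
        w = rng.standard_normal()    # one sample of the inner randomness
        y = (1.0 - b_k) * y + b_k * (x + w)   # running estimate of E_w[g_w(x)]
        x = x - a_k * 2.0 * y                 # noisy quasi-gradient step on x
    return x

x_final = scgd_toy()
```

A plain stochastic gradient method is not available here because an unbiased sample of the true gradient 2·𝔼_w[g_w(x)]·∇g would require the inner expectation itself, which is exactly what the auxiliary variable y approximates.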