Catalogue Search | MBRL
Explore the vast range of titles available.
30 result(s) for "Volterra Neural Integral Equation"
Introducing the Second-Order Features Adjoint Sensitivity Analysis Methodology for Neural Integral Equations of the Volterra Type: Mathematical Methodology and Illustrative Application to Nuclear Engineering
2025
This work presents the general mathematical frameworks of the “First- and Second-Order Features Adjoint Sensitivity Analysis Methodology for Neural Integral Equations of Volterra Type,” designated as the 1st-FASAM-NIE-V and the 2nd-FASAM-NIE-V methodologies, respectively. Using a single large-scale (adjoint) computation, the 1st-FASAM-NIE-V enables the most efficient computation of the exact expressions of all first-order sensitivities of the decoder response with respect to the feature functions and to the optimal values of the NIE-net’s parameters/weights after the respective NIE-Volterra-net was optimized to represent the underlying physical system. The computation of all second-order sensitivities with respect to the feature functions using the 2nd-FASAM-NIE-V requires as many large-scale computations as there are first-order sensitivities of the decoder response with respect to the feature functions. Subsequently, the second-order sensitivities of the decoder response with respect to the primary model parameters are obtained trivially by applying the “chain rule of differentiation” to the second-order sensitivities with respect to the feature functions. The application of the 1st-FASAM-NIE-V and the 2nd-FASAM-NIE-V methodologies is illustrated by using a well-known model for neutron slowing down in a homogeneous hydrogenous medium, which yields tractable closed-form exact explicit expressions for all quantities of interest, including the various adjoint sensitivity functions and first- and second-order sensitivities of the decoder response with respect to all feature functions and also primary model parameters.
Journal Article
A Neural Network Approach for Solving a Class of Fractional Optimal Control Problems
by Pakdaman, Morteza; Effati, Sohrab; Javad Sabouri K.
in Approximation; Artificial Intelligence; Boundary conditions
2017
In this paper, perceptron neural networks are applied to approximate the solution of fractional optimal control problems. The necessary (and, in most cases, also sufficient) optimality conditions are stated in the form of a fractional two-point boundary value problem, which is then converted to a Volterra integral equation. Exploiting the ability of perceptron neural networks to approximate nonlinear functions, we first propose approximating functions to estimate the control, state, and co-state functions, constructed so that they satisfy the initial or boundary conditions. The approximating functions contain a neural network with unknown weights. Using an optimization approach, the weights are adjusted so that the approximating functions satisfy the optimality conditions of the fractional optimal control problem. Numerical results illustrate the advantages of the method.
Journal Article
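The construction described in the abstract above—approximating functions that satisfy the initial or boundary conditions by design—is commonly realized by building the conditions into the trial solution, e.g. û(t) = u0 + t·N(t, θ) for an initial condition u(0) = u0, where N is the network output. A minimal sketch of this generic construction, not the authors' code; all names and shapes are made up for illustration:

```python
import numpy as np

def net(t, W, b, v):
    # tiny one-hidden-layer perceptron: v . tanh(W*t + b)
    return v @ np.tanh(W * t + b)

def trial(t, u0, params):
    # trial solution that satisfies u(0) = u0 by construction,
    # since the network term is multiplied by t
    W, b, v = params
    return u0 + t * net(t, W, b, v)

# regardless of the (here random, untrained) weights,
# the initial condition holds exactly
rng = np.random.default_rng(0)
params = (rng.normal(size=5), rng.normal(size=5), rng.normal(size=5))
val = trial(0.0, 3.0, params)   # -> 3.0 exactly
```

Training then only has to drive the optimality-condition residual to zero; the constraint never needs a penalty term.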
Finite-time projective synchronization of memristor-based delay fractional-order neural networks
by Zheng, Mingwen; Yang, Yixian; Xiao, Jinghua
in Automotive Engineering; Classical Mechanics; Control
2017
This paper mainly investigates the finite-time projective synchronization problem of memristor-based delay fractional-order neural networks (MDFNNs). Using the definition of finite-time projective synchronization, combined with the memristor model, set-valued maps and differential inclusion theory, the Gronwall–Bellman integral inequality, and a Volterra integral equation, finite-time projective synchronization of MDFNNs is achieved via a linear feedback controller. Novel sufficient conditions are obtained to guarantee the finite-time projective synchronization of the drive-response MDFNNs. We also analyze the feasible region of the settling time. Finally, two numerical examples are given to show the effectiveness of the proposed results.
Journal Article
The First- and Second-Order Features Adjoint Sensitivity Analysis Methodologies for Neural Integro-Differential Equations of Volterra Type: Mathematical Framework and Illustrative Application to a Nonlinear Heat Conduction Model
2025
This work presents the mathematical frameworks of the “First-Order Features Adjoint Sensitivity Analysis Methodology for Neural Integro-Differential Equations of Volterra-Type” (1st-FASAM-NIDE-V) and the “Second-Order Features Adjoint Sensitivity Analysis Methodology for Neural Integro-Differential Equations of Volterra-Type” (2nd-FASAM-NIDE-V). It is shown that the 1st-FASAM-NIDE-V methodology enables the efficient computation of exactly-determined first-order sensitivities of the decoder response with respect to the optimized NIDE-V parameters, requiring a single “large-scale” computation for solving the 1st-Level Adjoint Sensitivity System (1st-LASS), regardless of the number of weights/parameters underlying the NIE-net. The 2nd-FASAM-NIDE-V methodology enables the computation, with unparalleled efficiency, of the second-order sensitivities of decoder responses with respect to the optimized/trained weights involved in the NIDE-V’s decoder, hidden layers, and encoder, requiring only as many “large-scale” computations as there are non-zero first-order sensitivities with respect to the feature functions. These characteristics of the 1st-FASAM-NIDE-V and 2nd-FASAM-NIDE-V are illustrated by considering a nonlinear heat conduction model that admits analytical solutions, enabling the exact verification of the expressions obtained for the first- and second-order sensitivities of NIDE-V decoder responses with respect to the model’s functions of parameters (weights) that characterize the heat conduction model.
Journal Article
Homotopy Analysis Method and Physics-Informed Neural Networks for Solving Volterra Integral Equations with Discontinuous Kernels
2025
This paper addresses first- and second-kind Volterra integral equations (VIEs) with discontinuous kernels. A hybrid method combining the Homotopy Analysis Method (HAM) and Physics-Informed Neural Networks (PINNs) is developed. The convergence of the HAM is analyzed. Benchmark examples confirm that the proposed HAM-PINNs approach achieves high accuracy and robustness, demonstrating its effectiveness for complex kernel structures.
Journal Article
Hyers-Ulam Stability of Volterra Type Integro-Differential Euler Equations with Delay
by Tian, Siyu; Shao, Jing; Zheng, Zhaowen
in Bibliographic literature; Differential equations; Euler-Lagrange equation
2025
In this paper, the Hyers-Ulam stability and the Hyers-Ulam-Rassias stability of Volterra type integro-differential Euler equations are studied. Using a Gronwall type inequality and the fixed point approach, the Hyers-Ulam stability and Hyers-Ulam-Rassias stability of Volterra type integro-differential Euler equations are discussed in detail under four mutually exclusive cases. Four examples are provided to illustrate applications of the given results. The obtained results can be useful for applied researchers in numerous scientific areas.
Journal Article
Dynamical transition in controllable quantum neural networks with large depth
2024
Understanding the training dynamics of quantum neural networks is a fundamental task in quantum information science with wide impact in physics, chemistry and machine learning. In this work, we show that the late-time training dynamics of quantum neural networks with a quadratic loss function can be described by the generalized Lotka-Volterra equations, leading to a transcritical bifurcation transition in the dynamics. When the targeted value of the loss function crosses the minimum achievable value from above to below, the dynamics evolve from a frozen-kernel dynamics to a frozen-error dynamics, showing a duality between the quantum neural tangent kernel and the total error. In both regions, the convergence towards the fixed point is exponential, while at the critical point it becomes polynomial. We provide a non-perturbative analytical theory to explain the transition via a restricted Haar ensemble at late time, when the output state approaches the steady state. By mapping the Hessian to an effective Hamiltonian, we also identify a linearly vanishing gap at the transition point. Compared with the linear loss function, we show that a quadratic loss function within the frozen-error dynamics enables a speedup in the training convergence. The theoretical findings are verified experimentally on IBM quantum devices.
Understanding the training dynamics of quantum neural networks is a fundamental task in quantum information science. Here, the authors show how these follow generalized Lotka-Volterra equations, revealing a transition between frozen-kernel, critical-point and frozen-error dynamics. The theoretical findings, validated on IBM devices, provide insight into cost function design.
Journal Article
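For orientation on the dynamics named in the record above: the generalized Lotka-Volterra equations have the form dx_i/dt = x_i (r_i + Σ_j A_ij x_j). A minimal forward-Euler integration sketch, illustrative only and not taken from the paper; the single-species parameter values are made up:

```python
import numpy as np

def glv_step(x, r, A, dt):
    # one forward-Euler step of the generalized Lotka-Volterra system:
    # dx_i/dt = x_i * (r_i + sum_j A_ij * x_j)
    return x + dt * x * (r + A @ x)

# single-species (logistic) case: dx/dt = x*(1 - x), stable fixed point x = 1
x = np.array([0.1])
r, A = np.array([1.0]), np.array([[-1.0]])
for _ in range(2000):          # integrate to t = 20
    x = glv_step(x, r, A, dt=0.01)
# x has converged to the fixed point r/|A| = 1
```

Near a stable fixed point the distance shrinks geometrically per step, matching the exponential convergence regimes described in the abstract.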
On the Performance of Physics-Based Neural Networks for Symmetric and Asymmetric Domains: A Comparative Study and Hyperparameter Analysis
by Pleszczyński, Mariusz; Brociek, Rafał; Mughal, Dawood Asghar
in Accuracy; Approximation; Asymmetry
2025
This work investigates the use of physics-informed neural networks (PINNs) for solving representative classes of differential and integro-differential equations, including the Burgers, Poisson, and Volterra equations. The examples presented are chosen to address both symmetric and asymmetric domains. PINNs integrate prior physical knowledge with the approximation capabilities of neural networks, allowing the modeling of physical phenomena without explicit domain discretization. In addition to evaluating accuracy against analytical solutions (where available) and established numerical methods, the study systematically examines the impact of key hyperparameters—such as the number of hidden layers, neurons per layer, and training points—on solution quality and stability. The impact of a symmetric domain on solution speed is also analyzed. The experimental results highlight the strengths and limitations of PINNs and provide practical guidelines for their effective application as an alternative or complement to traditional computational approaches.
Journal Article
Approximate solutions to several classes of Volterra and Fredholm integral equations using the neural network algorithm based on the sine-cosine basis function and extreme learning machine
2023
In this study, we investigate a new neural network method to solve Volterra and Fredholm integral equations based on the sine-cosine basis function and the extreme learning machine (ELM) algorithm. Considering the ELM algorithm, sine-cosine basis functions, and several classes of integral equations, the improved model is designed. The novel neural network model consists of an input layer, a hidden layer, and an output layer, in which the hidden layer is eliminated by utilizing the sine-cosine basis function. Meanwhile, by exploiting the characteristic of the ELM algorithm that the hidden-layer biases and input weights are set automatically without iterative tuning, we can greatly reduce the model complexity and improve the calculation speed. Furthermore, the problem of finding the network parameters is converted into solving a set of linear equations. One advantage of this method is that we can obtain not only good numerical solutions for the first- and second-kind Volterra integral equations but also acceptable solutions for the first- and second-kind Fredholm integral equations and Volterra–Fredholm integral equations. Another advantage is that the improved algorithm provides the approximate solution of several kinds of linear integral equations in closed form (i.e., continuous and differentiable), so the solution can be evaluated at any point. Several numerical experiments are performed on various types of integral equations to illustrate the reliability and efficiency of the proposed method. Experimental results verify that the proposed method achieves very high accuracy and strong generalization ability.
Journal Article
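The core reduction described in the abstract above—expanding the unknown in a sine-cosine basis and solving a linear system—can be sketched for a second-kind Volterra equation u(x) = f(x) + ∫_a^x K(x,t) u(t) dt. This is a generic collocation/least-squares sketch, not the authors' ELM implementation; all names, basis sizes, and quadrature choices are illustrative:

```python
import numpy as np

def solve_volterra2(f, K, a, b, n_basis=8, n_colloc=40, n_quad=64):
    """Sketch: u(x) = f(x) + int_a^x K(x,t) u(t) dt, with u expanded
    in the sine-cosine basis {1, sin(kx), cos(kx), k = 1..n_basis}."""
    def basis(t):
        t = np.asarray(t, dtype=float)
        cols = [np.ones_like(t)]
        for k in range(1, n_basis + 1):
            cols += [np.sin(k * t), np.cos(k * t)]
        return np.stack(cols, axis=-1)            # (..., 2*n_basis + 1)

    xs = np.linspace(a, b, n_colloc)
    A = basis(xs)                                 # phi_k(x_i)
    for i, xi in enumerate(xs):
        # trapezoid rule for int_a^{xi} K(xi, t) phi_k(t) dt
        ts = np.linspace(a, xi, n_quad)
        g = K(xi, ts)[:, None] * basis(ts)        # (n_quad, n_coeff)
        h = (xi - a) / (n_quad - 1)
        A[i] -= h * (g[0] / 2 + g[1:-1].sum(axis=0) + g[-1] / 2)
    # linear system in the basis coefficients, solved by least squares
    coef, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
    return lambda x: basis(x) @ coef              # closed-form approximant

# u(x) = 1 + int_0^x u(t) dt has the exact solution u(x) = exp(x)
u = solve_volterra2(lambda x: np.ones_like(x),
                    lambda x, t: np.ones_like(t), 0.0, 1.0)
```

The returned approximant is a fixed trig expansion, so—as the abstract notes for the closed-form property—it can be evaluated (and differentiated) at any point.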
Multi-variable Volterra kernels identification using time-delay neural networks: application to unsteady aerodynamic loading
2019
In the last decades, the Volterra series theory has been used to construct reduced-order models of nonlinear systems in engineering and applied sciences. For the particular case of weakly nonlinear aerodynamic and aeroelastic systems, the Volterra series theory has been tested as an alternative to the high computing costs of CFD methods. The determination of a Volterra series model depends on identifying the kernels associated with the respective convolution integrals. Volterra kernel identification has been attempted in many ways, but the majority of them address only the direct kernels of single-input, single-output nonlinear systems. However, multiple-input, multiple-output relations are the most typical case for many dynamic systems. In this case, the so-called Volterra cross-kernels represent the internal couplings between multiple inputs. Not many generalizations of the single-input kernel identification methods to multi-input Volterra kernels are available in the literature. This work proposes a methodology for the identification of Volterra direct kernels and cross-kernels, based on time-delay neural networks and the relationship between the kernel functions and the internal parameters of the network. Expressions to derive the p-th-order Volterra direct kernels and cross-kernels from the internal parameters of a trained time-delay neural network are derived. The method is checked with a two-degree-of-freedom, two-input, one-output nonlinear system to demonstrate its capabilities. The application to a mildly nonlinear unsteady aerodynamic loading due to pitching and heaving motions of an airfoil is also evaluated. The Volterra direct kernels and cross-kernels of up to third order are successfully identified using training datasets computed with CFD simulations of the Euler equations. Comparisons between CFD simulations and Volterra model predictions are presented, demonstrating the potential of the method to systematically extract kernels from neural networks.
Journal Article
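To make the object being identified in the record above concrete: in discrete time, a second-order Volterra series maps an input u[n] to y[n] = Σ_k h1[k] u[n−k] + Σ_{k1,k2} h2[k1,k2] u[n−k1] u[n−k2], where h1 is the first-order (direct) kernel and h2 the second-order kernel. A minimal evaluation sketch of the model form only—not the paper's neural-network identification procedure; names are illustrative:

```python
import numpy as np

def volterra_series(u, h1, h2):
    # y[n] = sum_k h1[k] u[n-k] + sum_{k1,k2} h2[k1,k2] u[n-k1] u[n-k2]
    M = len(h1)                                # kernel memory length
    up = np.concatenate([np.zeros(M - 1), u])  # zero-pad past samples
    y = np.empty(len(u))
    for n in range(len(u)):
        w = up[n:n + M][::-1]    # [u[n], u[n-1], ..., u[n-M+1]]
        y[n] = h1 @ w + w @ h2 @ w
    return y

# memoryless sanity check (M = 1): y = 2*u + 3*u**2
y = volterra_series(np.array([1.0, 2.0]),
                    np.array([2.0]), np.array([[3.0]]))
# -> [5., 16.]
```

Identification reverses this map: given input/output records, recover h1 and h2 (and, for multiple inputs, the cross-kernels coupling them).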