137 result(s) for "Smooth gradients"
Using the Chou’s 5-steps rule to predict splice junctions with interpretable bidirectional long short-term memory networks
Neural models have achieved state-of-the-art performance on several genome sequence-based prediction tasks. Such models take only nucleotide sequences as input and learn relevant features on their own. However, extracting interpretable motifs from the model remains a challenge. This work explores the ability of various existing visualization techniques to infer relevant sequence information learnt by a recurrent neural network (RNN) on the task of splice junction identification. The visualization techniques have been adapted to suit genome sequences as input. The visualizations inspect genomic regions at the level of a single nucleotide as well as spans of consecutive nucleotides. This inspection is performed by modifying either the input sequences (perturbation based) or the embedding space (back-propagation based). We infer features pertaining to both canonical and non-canonical splicing from a single neural model. Results indicate that the visualization techniques produce comparable performances for branchpoint detection. However, for canonical donor and acceptor junction motifs, perturbation-based visualizations perform better than back-propagation-based visualizations, and vice versa for non-canonical motifs. The source code of our stand-alone SpliceVisuL tool is available at https://github.com/aaiitggrp/SpliceVisuL.
• We employ a BLSTM network with attention for the prediction of splice junctions.
• The proposed architecture, named SpliceVisuL, achieves state-of-the-art performance.
• Some visualization techniques are redesigned to comprehend genome sequences.
• Features learnt by the model are extracted and validated against existing knowledge.
• A comparative study of the visualizations is done in terms of the learnt features.
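The perturbation-based inspection mentioned in this abstract can be sketched generically: score each position by how much the model output drops when its nucleotide is substituted. The `score` function below is a hypothetical stand-in (a toy counter of the canonical "GT" donor dinucleotide), not the paper's BLSTM.

```python
# Sketch of perturbation-based importance scoring for a sequence model.
# `score` stands in for any trained model's output; here it is a toy
# motif matcher used only for illustration.

def score(seq):
    # Toy model: count occurrences of the canonical donor dinucleotide "GT".
    return sum(1.0 for i in range(len(seq) - 1) if seq[i:i+2] == "GT")

def perturbation_importance(seq, alphabet="ACGT"):
    base = score(seq)
    importances = []
    for i, nt in enumerate(seq):
        # Average score drop over all substitutions at position i.
        drops = [base - score(seq[:i] + sub + seq[i+1:])
                 for sub in alphabet if sub != nt]
        importances.append(sum(drops) / len(drops))
    return importances

imp = perturbation_importance("AAGTCC")
# The two positions forming the "GT" dinucleotide receive the highest scores.
```

A real application would replace `score` with the trained network's prediction probability, leaving the perturbation loop unchanged.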
A quantitative study of the effects of a dual layer coating drug-eluting stent on safety and efficacy
A key strategy for increasing drug mass (DM) while maintaining good safety is to improve the drug release profile (RP). We designed a dual layer coating drug-eluting stent (DES) that exhibited smaller concentration gradients between the coating and the artery wall and significantly impacted the drug RP. However, a detailed understanding of the effects of the DES designed by our team on safety and efficacy is still lacking. The objective of this study was to provide a comprehensive multiscale computational framework that would allow us to probe the safety and efficacy of the DES we designed. This framework consisted of four coupled modules, namely (1) a mechanical stimuli module, simulating mechanical stimuli caused by percutaneous coronary intervention through a finite element analysis, (2) an inflammation module, simulating inflammation of vascular smooth muscle cells (VSMCs) induced by mechanical stimuli through an agent-based model (ABM), (3) a drug transport module, simulating drug transport through a continuum-based approach, and (4) a mitosis module, simulating VSMC mitosis through an ABM. Our results indicated that when the DM increased to two times the initial DM value, the DES we designed had higher safety and lower efficacy values than a conventional DES. When the DM increased to five times the initial DM value, the DES we designed had higher safety than a conventional DES, and negligible differences in efficacy compared with a conventional DES. In summary, the DES we designed exhibited a significant advantage in safety, but a slightly reduced efficacy compared with that of a conventional DES.
First-order methods of smooth convex optimization with inexact oracle
We introduce the notion of inexact first-order oracle and analyze the behavior of several first-order methods of smooth convex optimization used with such an oracle. This notion of inexact oracle naturally appears in the context of smoothing techniques, Moreau–Yosida regularization, Augmented Lagrangians and many other situations. We derive complexity estimates for primal, dual and fast gradient methods, and study in particular their dependence on the accuracy of the oracle and the desired accuracy of the objective function. We observe that the superiority of fast gradient methods over the classical ones is no longer absolute when an inexact oracle is used. We prove that, contrary to simple gradient schemes, fast gradient methods must necessarily suffer from error accumulation. Finally, we show that the notion of inexact oracle allows the application of first-order methods of smooth convex optimization to solve non-smooth or weakly smooth convex problems.
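A minimal sketch of the setting: a primal gradient method driven by an oracle that returns the gradient only up to a bounded error. The quadratic objective, step size, and noise level below are illustrative assumptions, not the paper's construction.

```python
import random

# Gradient method with an inexact first-order oracle: the oracle returns
# the true gradient plus a bounded perturbation delta.

def inexact_grad(x, delta):
    true_grad = 2.0 * x                      # f(x) = x^2, so f'(x) = 2x
    return true_grad + random.uniform(-delta, delta)

def gradient_method(x0, delta, L=2.0, iters=200):
    x = x0
    for _ in range(iters):
        x -= (1.0 / L) * inexact_grad(x, delta)   # step 1/L for L-smooth f
    return x

random.seed(0)
x_exact = gradient_method(5.0, delta=0.0)
x_noisy = gradient_method(5.0, delta=0.1)
# With an exact oracle the iterates reach the minimizer x* = 0; with an
# inexact oracle they stall inside a neighborhood whose size scales with delta.
```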
A unifying perspective: the relaxed linear micromorphic continuum
We formulate a relaxed linear elastic micromorphic continuum model with symmetric Cauchy force stresses and curvature contribution depending only on the micro-dislocation tensor. Our relaxed model is still able to fully describe rotation of the microstructure and to predict nonpolar size effects. It is intended for the homogenized description of highly heterogeneous, but nonpolar materials with microstructure liable to slip and fracture. In contrast to classical linear micromorphic models, our free energy is not uniformly pointwise positive definite in the control of the independent constitutive variables. The new relaxed micromorphic model supports well-posedness results for the dynamic and static case. There, decisive use is made of new coercive inequalities recently proved by Neff, Pauly and Witsch and by Bauer, Neff, Pauly and Starke. The new relaxed micromorphic formulation can be related to dislocation dynamics, gradient plasticity and seismic processes of earthquakes. It unifies and simplifies the understanding of the linear micromorphic models.
First-order methods almost always avoid strict saddle points
We establish that first-order methods avoid strict saddle points for almost all initializations. Our results apply to a wide variety of first-order methods, including (manifold) gradient descent, block coordinate descent, mirror descent and variants thereof. The connecting thread is that such algorithms can be studied from a dynamical systems perspective in which appropriate instantiations of the Stable Manifold Theorem allow for a global stability analysis. Thus, neither access to second-order derivative information nor randomness beyond initialization is necessary to provably avoid strict saddle points.
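The claim can be illustrated on a toy function with a strict saddle at the origin, assuming plain gradient descent with random initialization (the function and step size are illustrative choices, not from the paper).

```python
import random

# For f(x, y) = x^2 + y^4/4 - y^2/2 the origin is a strict saddle (the
# Hessian has a negative eigenvalue in the y direction) and (0, +1),
# (0, -1) are the minimizers. Gradient descent from a random
# initialization escapes the saddle and converges to a minimizer.

def grad(x, y):
    return 2.0 * x, y**3 - y

def gradient_descent(x, y, step=0.1, iters=500):
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

random.seed(1)
x0, y0 = random.uniform(-1, 1), random.uniform(-1, 1)
x, y = gradient_descent(x0, y0)
# The iterate lands at one of the minimizers (0, ±1), not at the saddle (0, 0).
```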
Non-smooth classification model based on new smoothing technique
This work describes a framework for solving the support vector machine with kernel (SVMK) problem. It has recently been proved that using a non-smooth loss function for supervised learning problems gives more efficient results [1], which motivates solving the SVMK problem with the hinge loss function. However, the hinge loss function is non-differentiable, so standard optimization methods cannot be applied directly to minimize the empirical risk. To overcome this difficulty, a special smoothing technique for the hinge loss is proposed. The resulting smooth problem, combined with Tikhonov regularization, is then solved using a stochastic gradient descent method. Finally, numerical experiments on academic and real-life datasets are presented to show the efficiency of the proposed approach.
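The pipeline this abstract describes (smooth the hinge loss, add Tikhonov regularization, run stochastic gradient descent) can be sketched as follows. The quadratic smoothing with parameter `mu` and the bias-free linear model are assumptions for illustration; the paper's exact smoothing technique may differ.

```python
import random

# Smoothed hinge loss + Tikhonov regularization, minimized by SGD on a
# bias-free linear model. Data and hyperparameters are toy choices.

def smooth_hinge_grad(margin, mu=0.5):
    # Derivative of the smoothed hinge w.r.t. the margin: 0 for
    # margin >= 1, a linear ramp on [1 - mu, 1], and -1 below 1 - mu.
    if margin >= 1.0:
        return 0.0
    if margin <= 1.0 - mu:
        return -1.0
    return (margin - 1.0) / mu

def sgd_svm(data, lam=0.01, step=0.1, iters=500):
    w = 0.0
    for _ in range(iters):
        x, y = random.choice(data)
        g = smooth_hinge_grad(y * w * x)
        w -= step * (g * y * x + lam * w)   # lam * w is the Tikhonov term
    return w

random.seed(0)
data = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1)]  # linearly separable
w = sgd_svm(data)
# The learned classifier separates the classes: sign(w * x) == y for every point.
```

With the loss smoothed, every update uses a well-defined gradient, which is exactly what the non-differentiable hinge prevents.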
Optimized first-order methods for smooth convex minimization
We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle (Math Program 145(1–2):451–482, 2014, doi:10.1007/s10107-013-0653-0) recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods (Polyak in USSR Comput Math Math Phys 4(5):1–17, 1964, doi:10.1016/0041-5553(64)90137-5), and Nesterov's fast gradient methods (Nesterov in Sov Math Dokl 27(2):372–376, 1983; Math Program 103(1):127–152, 2005, doi:10.1007/s10107-004-0552-5). However, the numerical method in Drori and Teboulle (2014) is computationally expensive for large N, and the corresponding numerically optimized first-order algorithm in Drori and Teboulle (2014) requires impractical memory and computation for large-scale optimization problems. In this paper, we propose optimized first-order algorithms that achieve a convergence bound that is two times smaller than for Nesterov's fast gradient methods; our bound is found analytically and refines the numerical bound in Drori and Teboulle (2014). Furthermore, the proposed optimized first-order methods have efficient forms that are remarkably similar to Nesterov's fast gradient methods.
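For reference, Nesterov's fast gradient method, the baseline whose worst-case bound the proposed methods halve, can be sketched as follows; the test function, dimensions, and iteration count are illustrative.

```python
# Nesterov's fast gradient method for an L-smooth convex function:
# a gradient step followed by a momentum step with coefficients built
# from the t_k sequence.

def fgm(grad, x0, L, iters):
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(x)
        y_next = [xi - gi / L for xi, gi in zip(x, g)]          # gradient step
        t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        x = [yn + ((t - 1.0) / t_next) * (yn - yo)              # momentum step
             for yn, yo in zip(y_next, y)]
        y, t = y_next, t_next
    return y

# f(v) = 5*v1^2 + v2^2 is L-smooth with L = 10; the minimizer is the origin.
sol = fgm(lambda v: [10.0 * v[0], 2.0 * v[1]], [3.0, 4.0], L=10.0, iters=300)
# The iterates approach the origin; the classical bound guarantees
# f(y_k) - f* <= 2*L*||x0 - x*||^2 / (k+1)^2.
```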
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
We consider minimization of a smooth nonconvex objective function using an iterative algorithm based on Newton’s method and the linear conjugate gradient algorithm, with explicit detection and use of negative curvature directions for the Hessian of the objective function. The algorithm tracks Newton-conjugate gradient procedures developed in the 1980s closely, but includes enhancements that allow worst-case complexity results to be proved for convergence to points that satisfy approximate first-order and second-order optimality conditions. The complexity results match the best known results in the literature for second-order methods.
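The negative-curvature detection at the heart of such Newton-CG procedures can be sketched as follows: run conjugate gradient on the Newton system H d = -g, and bail out with the current CG direction if it exhibits nonpositive curvature. This is a generic textbook safeguard, not the paper's exact algorithm; dimensions and the tolerance are illustrative.

```python
# CG solve of H d = -g with an explicit negative-curvature check:
# if a CG direction p satisfies p^T H p <= 0, it is itself a direction
# of negative curvature and is returned instead of a Newton step.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(H, v):
    return [dot(row, v) for row in H]

def newton_cg_direction(H, g, tol=1e-10, max_iter=50):
    n = len(g)
    d = [0.0] * n
    r = [-gi for gi in g]              # residual of H d = -g at d = 0
    p = r[:]
    for _ in range(max_iter):
        Hp = matvec(H, p)
        curv = dot(p, Hp)
        if curv <= 0.0:                # negative curvature detected
            return p, True
        alpha = dot(r, r) / curv
        d = [di + alpha * pi for di, pi in zip(d, p)]
        r_new = [ri - alpha * hpi for ri, hpi in zip(r, Hp)]
        if dot(r_new, r_new) < tol:
            break
        beta = dot(r_new, r_new) / dot(r, r)
        p = [rn + beta * pi for rn, pi in zip(r_new, p)]
        r = r_new
    return d, False

# Positive definite Hessian: CG returns the Newton step d = -H^{-1} g.
d, neg = newton_cg_direction([[4.0, 0.0], [0.0, 2.0]], [4.0, 2.0])
# d is close to [-1.0, -1.0] and neg is False.

# Indefinite Hessian: the safeguard fires on a negative-curvature direction.
d2, neg2 = newton_cg_direction([[1.0, 0.0], [0.0, -1.0]], [0.0, 1.0])
# neg2 is True.
```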
Frontal-to-visual information flow explains predictive motion tracking
• Multivariate EEG recordings reveal the neural representation of motion prediction.
• Delta oscillation dominantly represents motion prediction.
• Delta-phase gradient carries information flow for motion prediction.
• Anterior-to-posterior information flow explains the predictive behavioral bias.
Predictive tracking demonstrates our ability to maintain a line of vision on moving objects even when they temporarily disappear. Models of smooth pursuit eye movements posit that our brain achieves this ability by directly streamlining motor programming from continuously updated sensory motion information. To test this hypothesis, we obtained sensory motion representations from multivariate electroencephalogram activity while human participants covertly tracked a temporarily occluded moving stimulus with their eyes remaining stationary at the fixation point. The sensory motion representation of the occluded target evolves to its maximum strength at the expected timing of reappearance, suggesting a timely modulation of the internal model of the visual target. We further characterize the spatiotemporal dynamics of the task-relevant motion information by computing the phase gradients of slow oscillations. We discovered a predominant posterior-to-anterior phase gradient immediately after stimulus occlusion; at the expected timing of reappearance, however, the gradient reverses, becoming anterior-to-posterior. The behavioral bias of smooth pursuit eye movements, a signature of the predictive process of pursuit, was correlated with the posterior division of the gradient. These results suggest that the sensory motion area modulated by the prediction signal is involved in updating motor programming.
Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient of Smooth Convex Functions
This paper optimizes the step coefficients of first-order methods for smooth convex minimization in terms of the worst-case convergence bound (i.e., efficiency) of the decrease in the gradient norm. The work is based on the performance estimation problem approach. The worst-case gradient bound of the resulting method is optimal up to a constant for large-dimensional smooth convex minimization problems, under an initial bound on the cost function value. The paper then illustrates that the proposed method has a computationally efficient form similar to that of the optimized gradient method.