512 result(s) for "Rodriguez, Giuseppe"
Old and new parameter choice rules for discrete ill-posed problems
Linear discrete ill-posed problems are difficult to solve numerically because their solution is very sensitive to perturbations, which may stem from errors in the data and from round-off errors introduced during the solution process. The computation of a meaningful approximate solution requires that the given problem be replaced by a nearby problem that is less sensitive to disturbances. This replacement is known as regularization. A regularization parameter determines how much the regularized problem differs from the original one. The proper choice of this parameter is important for the quality of the computed solution. This paper studies the performance of known and new approaches to choosing a suitable value of the regularization parameter for the truncated singular value decomposition method and for the LSQR iterative Krylov subspace method in the situation when no accurate estimate of the norm of the error in the data is available. The regularization parameter choice rules considered include several L-curve methods, Regińska’s method and a modification thereof, extrapolation methods, the quasi-optimality criterion, rules designed for use with LSQR, as well as hybrid methods.
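One of the rules discussed, the quasi-optimality criterion, is particularly compact for TSVD: the difference between consecutive truncated solutions reduces to a single SVD expansion coefficient. The following Python sketch (a minimal illustration under standard TSVD definitions, not the paper's code) picks the truncation index that minimizes this difference:

```python
import numpy as np

def tsvd_quasiopt(A, b):
    """TSVD with the quasi-optimality parameter choice rule (sketch).

    The truncated solution is x_k = sum_{i<=k} (u_i^T b / s_i) v_i, so
    ||x_{k+1} - x_k|| = |u_{k+1}^T b| / s_{k+1}; quasi-optimality picks
    the k minimizing this quantity.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b) / s                # SVD expansion coefficients
    diffs = np.abs(coeffs[1:])            # ||x_{k+1} - x_k|| for k = 1..n-1
    k = int(np.argmin(diffs)) + 1         # chosen truncation index
    x_k = Vt[:k].T @ coeffs[:k]
    return x_k, k
```

On a severely ill-conditioned test problem (e.g., a Hilbert matrix with slightly noisy data), the truncated solution is typically far more accurate than the naive solve, even without knowing the noise norm.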
On Generating Synthetic Datasets for Photometric Stereo Applications
The mathematical model for photometric stereo makes several restricting assumptions, which are often not fulfilled in real-life applications. Specifically, an object surface does not always satisfy Lambert’s cosine law, leading to reflection issues. Moreover, the camera and the light source, in some situations, have to be placed at a close distance from the target, rather than at infinite distance from it. When studying algorithms for these complex situations, it is extremely useful to have synthetic datasets with known exact solutions at one's disposal, in order to assess the accuracy of a solution method. The aim of this paper is to present a Matlab package which constructs such datasets on the basis of a chosen exact solution, providing a tool for simulating various real camera/light configurations. This package, starting from the mathematical expression of a surface, or from a discrete sampling, allows the user to build a set of images matching a particular light configuration. Setting various parameters makes it possible to simulate different scenarios, which can be used to investigate the performance of reconstruction algorithms in several situations and test their response to lack of ideality in data. The ability to construct large datasets is particularly useful to train machine-learning-based algorithms.
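The ideal model underlying such datasets is simple to state: under Lambert's cosine law with distant lights, each pixel intensity is the albedo times the clamped inner product of the surface normal and the light direction. A minimal Python sketch of this rendering step (an illustration of the ideal model, not the package itself):

```python
import numpy as np

def render_lambertian(normals, albedo, lights):
    """Render ideal Lambertian images: I = albedo * max(0, <n, l>).

    normals : (H, W, 3) unit surface normals
    albedo  : (H, W) reflectance map
    lights  : (m, 3) unit light directions (distant-light model)
    Returns an (m, H, W) stack of synthetic images.
    """
    shading = np.einsum('hwc,mc->mhw', normals, lights)  # cosine of incidence angle
    return albedo[None] * np.clip(shading, 0.0, None)    # attached shadows clipped to 0
```

Departures from this model (near lights, non-Lambertian reflectance) are exactly the "lack of ideality" the package lets one simulate through additional parameters.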
Ascertaining the Ideality of Photometric Stereo Datasets under Unknown Lighting
The standard photometric stereo model makes several assumptions that are rarely verified in experimental datasets. In particular, the observed object should behave as a Lambertian reflector, and the light sources should be positioned at an infinite distance from it, along a known direction. Even when Lambert’s law is approximately fulfilled, an accurate assessment of the relative position between the light source and the target is often unavailable in real situations. The Hayakawa procedure is a computational method for estimating such information directly from data images. It occasionally breaks down when some of the available images excessively deviate from ideality. This is generally due to observing a non-Lambertian surface, or illuminating it from a close distance, or both. Indeed, in narrow shooting scenarios, typical, e.g., of archaeological excavation sites, it is impossible to position a flashlight at a sufficient distance from the observed surface. It is then necessary to understand if a given dataset is reliable and which images should be selected to better reconstruct the target. In this paper, we propose some algorithms to perform this task and explore their effectiveness.
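A key fact behind Hayakawa-style estimation is that, under the ideal Lambertian distant-light model, the data matrix (one column per image) has rank 3, since it factors into normals times light directions. This suggests a simple ideality check: measure how much of the data's energy lies in the first three singular values. The sketch below is a hedged simplification along these lines, not the selection algorithms proposed in the paper:

```python
import numpy as np

def rank3_ideality(images):
    """Score how well an image stack fits the ideal rank-3 model.

    Under the Lambertian, distant-light model the pixels-by-images data
    matrix has rank 3, so the fraction of squared singular values
    captured by the top three is a simple ideality score in [0, 1].
    images : (m, H, W) photometric stereo image stack.
    """
    m = images.shape[0]
    D = images.reshape(m, -1).T                 # pixels x images data matrix
    s = np.linalg.svd(D, compute_uv=False)
    return np.sum(s[:3] ** 2) / np.sum(s ** 2)
```

A score well below 1 signals deviation from ideality, e.g., due to close lighting or non-Lambertian reflectance.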
Forward Electromagnetic Induction Modelling in a Multilayered Half-Space: An Open-Source Software Tool
Electromagnetic induction (EMI) techniques are widely used in geophysical surveying. Their success is mainly due to their easy and fast data acquisition, but the effectiveness of data inversion is strongly influenced by the quality of sensed data, resulting from suiting the device configuration to the physical features of the survey site. Forward modelling is an essential tool to optimize this aspect and design a successful surveying campaign. In this paper, a new software tool for forward EMI modelling is introduced. It extends and complements an existing open-source package for EMI data inversion, and includes an interactive graphical user interface. Its use is explained by a theoretical introduction and demonstrated through a simulated case study. The nonlinear data inversion issue is briefly discussed and the inversion module of the package is extended by a new regularized minimal-norm algorithm.
Numerical Methods for Decorrelation Stretch
Decorrelation stretch is an image enhancement technique that emphasizes color differences, and it is also applicable to multispectral datasets. It transforms an image so that its color planes are uncorrelated, with assigned variances. The standard algorithm may suffer from numerical instability. Moreover, it is not able to manage degenerate cases, where color planes are linearly dependent. In this paper, we review the theory behind decorrelation stretch and propose some alternative algorithms that resolve the issues of the standard approach.
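The classical construction rotates the bands to principal components, rescales them to a target variance, and rotates back. A compact Python sketch of this eigendecomposition-based variant (an illustration of the standard technique; the small eigenvalue clamp is an assumed guard, not the paper's degenerate-case treatment):

```python
import numpy as np

def decorrelation_stretch(img, target_sigma=50.0):
    """Decorrelation stretch via eigendecomposition of the band covariance.

    img : (H, W, C) multiband image. Bands are rotated to principal
    components, rescaled to the target standard deviation, and rotated
    back, which decorrelates the bands while preserving overall hue.
    """
    H, W, C = img.shape
    X = img.reshape(-1, C).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)                 # band covariance matrix
    w, V = np.linalg.eigh(cov)                    # cov = V diag(w) V^T
    w = np.maximum(w, 1e-12)                      # crude guard for dependent bands
    T = V @ np.diag(target_sigma / np.sqrt(w)) @ V.T
    return ((X - mu) @ T + mu).reshape(H, W, C)
```

After the transform the band covariance equals target_sigma**2 times the identity, i.e., the planes are uncorrelated with the assigned variance.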
Scorepochs: A Computer-Aided Scoring Tool for Resting-State M/EEG Epochs
M/EEG resting-state analysis often requires defining the epoch length and the criteria used to select which epochs to include in the subsequent steps. However, the effects of epoch selection remain scarcely investigated, and the procedure used to (visually) inspect, label, and remove bad epochs is often not documented, thereby hindering the reproducibility of the reported results. In this study, we present Scorepochs, a simple and freely available tool for the automatic scoring of resting-state M/EEG epochs that aims to provide an objective method to aid M/EEG experts during the epoch selection procedure. We tested our approach on a freely available EEG dataset containing recordings from 109 subjects using the BCI2000 64-channel system.
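The core idea can be sketched in a few lines: compute a power spectrum per epoch and per channel, correlate each epoch's spectrum with those of all other epochs, and score each epoch by its mean correlation, so that epochs deviating from the recording's typical spectrum score low. This Python sketch is an assumed simplification of that idea (raw periodograms instead of any particular spectral estimator), not the released tool:

```python
import numpy as np

def score_epochs(data, fs, epoch_len):
    """Score resting-state epochs by spectral similarity (sketch).

    data      : (n_channels, n_samples) recording
    fs        : sampling frequency in Hz
    epoch_len : epoch length in seconds
    Returns one score per epoch; low scores flag atypical epochs.
    """
    n_ch, n_s = data.shape
    samples = int(fs * epoch_len)
    n_ep = n_s // samples
    epochs = data[:, :n_ep * samples].reshape(n_ch, n_ep, samples)
    psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2     # per-epoch periodogram
    scores = np.zeros(n_ep)
    for ch in range(n_ch):
        r = np.corrcoef(psd[ch])                        # epoch-by-epoch correlation
        scores += (r.sum(axis=1) - 1.0) / (n_ep - 1)    # mean off-diagonal correlation
    return scores / n_ch
```

An epoch contaminated by a large artifact has a spectrum unlike the rest of the recording and receives the lowest score.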
SoftNet: A Package for the Analysis of Complex Networks
Identifying the most important nodes according to specific centrality indices is an important issue in network analysis. Node metrics based on the computation of functions of the adjacency matrix of a network were defined by Estrada and his collaborators in various papers. This paper describes a MATLAB toolbox for computing such centrality indices using efficient numerical algorithms based on the connection between the Lanczos method and Gauss-type quadrature rules.
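For small networks these matrix-function centralities can be computed directly; for instance, Estrada's subgraph centrality is the diagonal of the matrix exponential of the adjacency matrix. The dense Python sketch below illustrates the quantity being computed; SoftNet's contribution is estimating it efficiently for large networks via Lanczos and Gauss-type quadrature rather than a full eigendecomposition:

```python
import numpy as np

def subgraph_centrality(A):
    """Subgraph centrality: diag(exp(A)) for a symmetric adjacency matrix.

    Dense illustration via eigendecomposition; entry i weights closed
    walks starting and ending at node i, shorter walks counting more.
    """
    w, V = np.linalg.eigh(A)
    return (V ** 2) @ np.exp(w)      # diagonal of V exp(diag(w)) V^T
```

On a path graph the middle node, which participates in the most closed walks, receives the largest centrality.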
Chained structure of directed graphs with applications to social and transportation networks
The need to determine the structure of a graph arises in many applications. This paper studies directed graphs and defines the notions of ℓ-chained and {ℓ, k}-chained directed graphs. These notions reveal structural properties of directed graphs that shed light on how the nodes of the graph are connected. Applications include city planning, information transmission, and disease propagation. We also discuss the notion of in-center and out-center vertices of a directed graph, which are vertices at the center of the graph. Computed examples provide illustrations, among which is the investigation of a bus network for a city.
Iterative Methods for the Computation of the Perron Vector of Adjacency Matrices
The power method is commonly applied to compute the Perron vector of large adjacency matrices. Blondel et al. [SIAM Rev. 46, 2004] investigated its performance when the adjacency matrix has multiple eigenvalues of the same magnitude. It is well known that the Lanczos method typically requires fewer iterations than the power method to determine eigenvectors with the desired accuracy. However, the Lanczos method demands more computer storage, which may make it impractical to apply to very large problems. The present paper adapts the analysis by Blondel et al. to the Lanczos and restarted Lanczos methods. The restarted methods are found to yield fast convergence and to require less computer storage than the Lanczos method. Computed examples illustrate the theory presented. Applications of the Arnoldi method are also discussed.
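For reference, the baseline the paper compares against is the plain power method, which repeatedly applies the adjacency matrix to a normalized vector. A minimal Python sketch (the standard iteration, not the restarted Lanczos variants analyzed in the paper):

```python
import numpy as np

def perron_power(A, tol=1e-10, maxit=1000):
    """Power method for the Perron vector of a nonnegative matrix.

    Converges when the Perron root strictly dominates in magnitude;
    with several eigenvalues of maximal magnitude (the case studied by
    Blondel et al.), plain iteration may fail to converge.
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / np.sqrt(n))       # positive starting vector
    for _ in range(maxit):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x
```

Its low storage cost (two vectors) is the property the restarted Lanczos methods aim to retain while converging in fewer iterations.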
Chained graphs and some applications
This paper introduces the notions of chained and semi-chained graphs. The chain of a graph, when existent, refines the notion of bipartivity and conveys important structural information. Also, the notion of a center vertex v_c is introduced. It is a vertex whose sum of p-th powers of distances to all other vertices in the graph is minimal, where the distance between a pair of vertices {v_c, v} is measured by the minimal number of edges that have to be traversed to go from v_c to v. This concept extends the definition of closeness centrality. Applications in which the center node is important include information transmission and city planning. Algorithms for the identification of approximate central nodes are provided and computed examples are presented.
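On small graphs the center vertices of this definition can be found exhaustively: run a breadth-first search from every vertex and sum the p-th powers of the resulting distances. The Python sketch below implements that brute-force definition (an illustration only; the paper provides algorithms for approximate centers on large graphs):

```python
from collections import deque

def center_vertices(adj, p=1):
    """Vertices minimizing the sum of p-th powers of shortest-path distances.

    adj : dict mapping each vertex to an iterable of neighbours
          (unweighted, connected graph). p = 1 recovers the vertex of
    maximal closeness-type centrality; larger p penalizes far vertices more.
    """
    def bfs_dist(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    cost = {u: sum(d ** p for d in bfs_dist(u).values()) for u in adj}
    best = min(cost.values())
    return [u for u, c in cost.items() if c == best]
```

For a path graph the unique center is the middle vertex for any p, matching the intuition that it is "closest" to everything else.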