15 result(s) for "Signoretto, Marco"
Regularization, optimization, kernels, and support vector machines
\"Obtaining reliable models from given data is becoming increasingly important in a wide range of different applications fields including the prediction of energy consumption, complex networks, environmental modelling, biomedicine, bioinformatics, finance, process modelling, image and signal processing, brain-computer interfaces, and others. In data-driven modelling approaches one has witnessed considerable progress in the understanding of estimating flexible nonlinear models, learning and generalization aspects, optimization methods, and structured modelling. One area of high impact both in theory and applications is kernel methods and support vector machines. Optimization problems, learning, and representations of models are key ingredients in these methods. On the other hand, considerable progress has also been made on regularization of parametric models, including methods for compressed sensing and sparsity, where convex optimization plays an important role. At the international workshop ROKS 2013 Leuven, 1 July 8-10, 2013, researchers from diverse fields were meeting on the theory and applications of regularization, optimization, kernels, and support vector machines. At this occasion the present book has been edited as a follow-up to this event, with a variety of invited contributions from presenters and scientific committee members. It is a collection of recent progress and advanced contributions on these topics, addressing methods including ...\"-- Provided by publisher.
Learning with tensors: a framework based on convex optimization and spectral regularization
We present a framework based on convex optimization and spectral regularization to perform learning when feature observations are multidimensional arrays (tensors). We give a mathematical characterization of spectral penalties for tensors and analyze a unifying class of convex optimization problems for which we present a provably convergent and scalable template algorithm. We then specialize this class of problems to perform learning both in a transductive as well as in an inductive setting. In the transductive case one has an input data tensor with missing features and, possibly, a partially observed matrix of labels. The goal is both to infer the missing input features and to predict the missing labels. For induction, the goal is to determine a model for each learning task to be used for out-of-sample prediction. Each training pair consists of a multidimensional array and a set of labels, each of which corresponds to related but distinct tasks. In either case the proposed technique exploits precise low multilinear rank assumptions over unknown multidimensional arrays; regularization is based on composite spectral penalties and connects to the concept of Multilinear Singular Value Decomposition. As a by-product of using a tensor-based formalism, our approach allows one to tackle the multi-task case in a natural way. Empirical studies demonstrate the merits of the proposed methods.
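As a hedged illustration (not the authors' implementation), a common composite spectral penalty encoding a low multilinear rank assumption is the weighted sum of nuclear norms of a tensor's mode unfoldings; a minimal NumPy sketch:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: mode-`mode` fibers become the columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def overlapped_nuclear_norm(tensor, weights=None):
    """Weighted sum of nuclear norms of all mode unfoldings.
    A standard convex surrogate for low multilinear rank; the paper's
    composite spectral penalties are more general than this sketch."""
    ndim = tensor.ndim
    if weights is None:
        weights = [1.0 / ndim] * ndim
    total = 0.0
    for mode, w in enumerate(weights):
        s = np.linalg.svd(unfold(tensor, mode), compute_uv=False)
        total += w * s.sum()
    return total

# A rank-1 tensor: every unfolding has rank 1, so each nuclear norm
# equals the tensor's Frobenius norm.
t = np.einsum('i,j,k->ijk', np.ones(2), np.ones(3), np.ones(4))
print(overlapped_nuclear_norm(t))  # sqrt(24), about 4.899
```

Each unfolding's nuclear norm penalizes rank along one mode, so minimizing the sum encourages all multilinear ranks to be small simultaneously.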
Improved Microarray-Based Decision Support with Graph Encoded Interactome Data
In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
Regularization, Optimization, Kernels, and Support Vector Machines
This book is a collection of invited contributions from leading researchers in machine learning. Comprised of 21 chapters, this comprehensive reference covers the latest research and advances in regularization, sparsity, and compressed sensing; describes recent progress in convex and large-scale optimization, kernel methods, and support vector machines; and discusses output kernel learning, domain adaptation, multi-layer support vector machines, and more.
Hybrid Conditional Gradient-Smoothing Algorithms with Applications to Sparse and Low Rank Regularization
Inspired by such algorithms, in this chapter we study a first-order method for solving certain convex optimization problems. We focus on problems of the form min {f(x) + g(Ax) + ω(x) : x ∈ H} (3.1) over a real Hilbert space H. We assume that f is a convex function with Hölder continuous gradient, g a Lipschitz continuous convex function, A a bounded linear operator, and ω a convex function defined over a bounded domain. We also assume that the computational operations available are the gradient of f, the proximity operator of g, and a subgradient of the convex conjugate ω*. A particularly common type of problem covered by (3.1) is min {f(x) + g(Ax) : x ∈ C}, (3.2) where C is a bounded, closed, convex subset of H. Common among such examples are regularization problems with one or more penalties in the objective (as the term g ∘ A) and one penalty as a constraint described by C.
Hybrid Conditional Gradient-Smoothing Algorithms with Applications to Sparse and Low Rank Regularization
We study a hybrid conditional gradient-smoothing algorithm (HCGS) for solving composite convex optimization problems which contain several terms over a bounded set. Examples of these include regularization problems with several norms as penalties and a norm constraint. HCGS extends conditional gradient methods to cases with multiple nonsmooth terms, in which standard conditional gradient methods may be difficult to apply. The HCGS algorithm borrows techniques from smoothing proximal methods and requires first-order computations (subgradients and proximity operations). Unlike proximal methods, HCGS benefits from the advantages of conditional gradient methods, which render it more efficient on certain large scale optimization problems. We demonstrate these advantages with simulations on two matrix optimization problems: regularization of matrices with combined ℓ1 and trace norm penalties; and a convex relaxation of sparse PCA.
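To make the appeal of this family concrete, here is a minimal sketch of the plain conditional gradient (Frank-Wolfe) method that HCGS builds on, applied to the ℓ1 ball. This is an illustrative baseline, not the authors' HCGS: it shows only the cheap linear-minimization step (a single signed vertex) that avoids projections on large-scale problems; HCGS additionally handles extra nonsmooth terms via smoothing.

```python
import numpy as np

def frank_wolfe_l1(grad_f, x0, radius=1.0, iters=200):
    """Plain conditional gradient on the l1 ball of given radius.
    grad_f: gradient oracle of the smooth objective f."""
    x = x0.copy()
    for t in range(iters):
        g = grad_f(x)
        # Linear minimization oracle over the l1 ball:
        # the minimizer of <g, s> is a signed scaled basis vector.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)  # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Minimize ||x - b||^2 / 2 subject to ||x||_1 <= 1; the constrained
# minimizer is sparse: (1, 0, 0) for this b.
b = np.array([3.0, -0.5, 0.2])
x = frank_wolfe_l1(lambda x: x - b, np.zeros(3), radius=1.0)
print(x)  # converges to [1, 0, 0]
```

Each iteration touches one coordinate of the atomically structured feasible set, which is what makes conditional gradient methods attractive for the large matrix problems (trace norm balls, sparse PCA relaxations) discussed in the abstract.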
Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear Spectral Penalties
We present a general framework to learn functions in tensor product reproducing kernel Hilbert spaces (TP-RKHSs). The methodology is based on a novel representer theorem suitable for existing as well as new spectral penalties for tensors. When the functions in the TP-RKHS are defined on the Cartesian product of finite discrete sets, in particular, our main problem formulation admits as a special case existing tensor completion problems. Other special cases include transfer learning with multimodal side information and multilinear multitask learning. For the latter case, our kernel-based view is instrumental to derive nonlinear extensions of existing model classes. We give a novel algorithm and show in experiments the usefulness of the proposed extensions.
CD66b−CD64dimCD115− cells in the human bone marrow represent neutrophil-committed progenitors
Here we report the identification of human CD66b−CD64dimCD115− neutrophil-committed progenitor cells (NCPs) within the SSCloCD45dimCD34+ and CD34dim/− subsets in the bone marrow. NCPs were either CD45RA+ or CD45RA−, and in vitro experiments showed that CD45RA acquisition was not mandatory for their maturation process. NCPs exclusively generated human CD66b+ neutrophils in both in vitro differentiation and in vivo adoptive transfer experiments. Single-cell RNA-sequencing analysis indicated NCPs fell into four clusters, characterized by different maturation stages and distributed along two differentiation routes. One of the clusters was characterized by an interferon-stimulated gene signature, consistent with the reported expansion of peripheral mature neutrophil subsets that express interferon-stimulated genes in diseased individuals. Finally, comparison of transcriptomic and phenotypic profiles indicated NCPs represented earlier neutrophil precursors than the previously described early neutrophil progenitors (eNePs), proNeus and COVID-19 proNeus. Altogether, our data shed light on the very early phases of neutrophil ontogeny. Cassatella and colleagues identify CD66b−CD64dimCD115− cells in the human bone marrow as the earliest neutrophil-committed progenitor cells described to date.
The slan antigen identifies the prototypical non-classical CD16+-monocytes in human blood
Peripheral monocytes in humans are conventionally divided into classical (CL, CD14++CD16−), intermediate (INT, CD14++CD16+) and non-classical (NC, CD14+CD16++) cells, based on their expression levels of CD14 and CD16. A major fraction of the NC-monocytes has been shown to express the 6-sulfo LacNAc (slan) antigen, but whether these slan+/NC-monocytes represent the prototypical non-classical monocytes or whether they are simply a sub-fraction with identical features as the remainder of NC-monocytes is still unclear. We analyzed the transcriptome (by bulk and single-cell RNA-seq), proteome, cell surface markers and production of discrete cytokines by peripheral slan+/NC- and slan−/NC-monocytes, in comparison to total NC-, CL- and INT-monocytes. By bulk RNA-seq and proteomic analysis, we found that slan+/NC-monocytes express higher levels of genes and proteins specific of NC-monocytes than slan−/NC-monocytes do. Unsupervised clustering of scRNA-seq data generated one cluster of NC- and one of INT-monocytes, where all slan+/NC-monocytes were allocated to the NC-monocyte cluster, while slan−/NC-monocytes were found, in part (13.4%), within the INT-monocyte cluster. In addition, total NC- and slan−/NC-monocytes, but not slan+/NC-monocytes, were found by both bulk RNA-seq and scRNA-seq to contain a small percentage of natural killer cells. In addition to comparatively characterizing total NC-, slan+/NC- and slan−/NC-monocyte transcriptomes and proteomes, our data prove that slan+/NC-, but not slan−/NC-, monocytes are more representative of prototypical NC-monocytes.
CO2 Photoreduction Under Visible Light by TiO2 and Carbon Dots Derived from Pyrolized Bio‐Oil
Herein, we report a study on upgrading pyrolysis bio‐oil from leather shaving waste to dope titania (TiO2) in situ with carbon dots (CDs). The CD-doped TiO2 exhibits remarkable activity as a photocatalyst under solar light for the direct conversion of carbon dioxide (CO2) and water vapor (H2O) to methane (CH4). Moreover, the catalytic activity also increased under UV radiation. This communication demonstrates the possibility of efficiently exploiting the liquid bio‐oil fraction from leather shaving waste to upcycle carbon dioxide. In fact, leather shaving waste bio‐oil is an effective bioproduct for preparing the carbon dots that were used to dope titania. The new photocatalysts were effectively used in CO2 photoreduction under visible light.