12,649 result(s) for "lasso"
Believe : the untold story behind Ted Lasso, the show that kicked its way into our hearts
\"The definitive book on the TV show Ted Lasso, written by New York Times journalist and editor Jeremy Egner, celebrating the show's improbable rise and cultural impact while never losing sight of the heart, friendship, and passion that have made it an enduring favorite for the ages When Ted Lasso first aired in 2020, nobody-including those who had worked on it-knew how a show inspired by an ad, centered around soccer, filled mostly with unknown actors, and led by a wondrously mustached \"nice guy\" would be received. Now, eleven Emmys and one Peabody Award later, it's safe to say that the show's status as a pop culture phenomenon is secure. And, for the first time, New York Times television editor Jeremy Egner explores the creation, production, and potent legacy of Ted Lasso. Drawing on dozens of interviews from key cast, creators, and more, Believe takes readers from the very first, silly NBC Premier League commercial to the pitch to Apple executives, then into the show's writer's room, through the brilliant international casting, and on to the unforgettable set and locations of the show itself. Egner approaches his reporting as a journalist and as a cultural critic, but also with an affection and admiration fans will appreciate, carefully and humorously telling Ted Lasso's story of teamwork, of hidden talent, of a group of friends looking around at the world's increasingly nasty discourse and deciding that maybe simple decency still had the power to bring us together-a story about what happens when you dare to believe\"-- Provided by publisher.
PROGRAM EVALUATION AND CAUSAL INFERENCE WITH HIGH-DIMENSIONAL DATA
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
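As a rough, simulation-only sketch of the orthogonal-moment idea described in this abstract (not the authors' full procedure), the snippet below estimates a treatment effect by lasso-residualizing both the outcome and the treatment on many controls and then regressing residual on residual. The data, variable names (y, d, X), and tuning choices are all illustrative assumptions.

```python
# Hedged sketch: orthogonal "partialling-out" with lasso-selected controls.
# Simulated data; the true treatment effect is 2.0 by construction.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.standard_normal((n, p))
d = X[:, :3] @ np.array([1.0, 0.5, -0.5]) + rng.standard_normal(n)   # treatment
y = 2.0 * d + X[:, :3] @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n)

# Step 1: residualize the outcome and the treatment on the controls via lasso.
ry = y - LassoCV(cv=5).fit(X, y).predict(X)
rd = d - LassoCV(cv=5).fit(X, d).predict(X)

# Step 2: regress residual on residual; the slope is the orthogonalized estimate.
theta = LinearRegression().fit(rd.reshape(-1, 1), ry).coef_[0]
print(f"orthogonalized treatment effect estimate: {theta:.3f}")
```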
The legendary lasso
The Lasso of Truth is one of the many weapons and trophies that fill the halls of Paradise Island's palace, and it is one of Wonder Woman's favorite tools, which she used to defeat her ancient enemy, Circe.
The joint graphical lasso for inverse covariance estimation across multiple classes
We consider the problem of estimating multiple related Gaussian graphical models from a high dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes to estimate multiple graphical models that share certain characteristics, such as the locations or weights of non‐zero edges. Our approach is based on maximizing a penalized log‐likelihood. We employ generalized fused lasso or group lasso penalties and implement a fast alternating directions method of multipliers algorithm to solve the corresponding convex optimization problems. The performance of the method proposed is illustrated through simulated and real data examples.
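The joint graphical lasso itself is not available in scikit-learn, so as a hedged baseline sketch the snippet below fits a separate graphical lasso per class on simulated data; this is the "no sharing across classes" special case that the joint method improves on by penalizing all classes together. Class labels, dimensions, and the penalty value are illustrative assumptions.

```python
# Baseline sketch only: one graphical lasso per class (not the joint estimator).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
classes = {"A": rng.standard_normal((200, 10)),
           "B": rng.standard_normal((200, 10))}

precisions = {}
for label, X in classes.items():
    model = GraphicalLasso(alpha=0.2).fit(X)      # sparse inverse covariance per class
    precisions[label] = model.precision_

# The joint graphical lasso would instead apply a fused or group lasso penalty
# across classes so that the estimated graphs share edge structure.
for label, Theta in precisions.items():
    nonzero_offdiag = int((np.abs(Theta) > 1e-8).sum() - Theta.shape[0])
    print(label, "nonzero off-diagonal entries:", nonzero_offdiag)
```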
Improved Acoustic Emission Tomography Algorithm Based on Lasso Regression
This study developed a novel acoustic emission (AE) tomography algorithm for non-destructive testing (NDT) based on Lasso regression (LASSO). Conventional AE tomography requires considerable measurement data to obtain the elastic velocity distribution used for structural evaluation. The new algorithm, which applies LASSO to AE tomography, removes this limitation and reconstructs an equivalent velocity distribution that describes the damaged region from fewer event data. Three numerical simulation models were studied to demonstrate the capability of the proposed method, and its performance was verified on three different types of classical concrete damage simulation models and compared with that of the conventional SIRT algorithm in experiments. The study demonstrates that LASSO can be applied to AE tomography and that shadow artifacts are eliminated from the resulting elastic velocity distributions even with fewer measurement paths.
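As a toy illustration of the underlying inverse problem (not the paper's algorithm), travel-time tomography can be posed as a sparse linear system d = A s, where A maps grid-cell slowness anomalies s to path measurements d, and solved with lasso when measurements are scarce. The sensitivity matrix, sizes, and penalty below are stand-in assumptions (a real path-length matrix would be nonnegative).

```python
# Toy sketch: sparse tomographic reconstruction with lasso on simulated data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_paths, n_cells = 60, 100                       # fewer measurements than unknowns
A = rng.standard_normal((n_paths, n_cells))      # stand-in sensitivity matrix
s_true = np.zeros(n_cells)
s_true[[12, 47, 83]] = 0.5                       # a few "damaged" (slow) cells
d = A @ s_true + 0.01 * rng.standard_normal(n_paths)

lasso = Lasso(alpha=0.01, fit_intercept=False).fit(A, d)
print("cells flagged as slow:", np.flatnonzero(np.abs(lasso.coef_) > 0.1))  # ideally the damaged cells
```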
SPARSE MODELS AND METHODS FOR OPTIMAL INSTRUMENTS WITH AN APPLICATION TO EMINENT DOMAIN
We develop results for the use of Lasso and post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well-approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic "beta-min" conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso-based IV estimator with a data-driven penalty performs well compared to recently advocated many-instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso-based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post-Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non-Gaussian, heteroscedastic disturbances that uses a data-weighted 𝓁₁-penalty function. By innovatively using moderate deviation theory for self-normalized sums, we provide convergence rates for the resulting Lasso and post-Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^{1/3}). We also provide a data-driven method for choosing the penalty level that must be specified in obtaining Lasso and post-Lasso estimates and establish its asymptotic validity under non-Gaussian, heteroscedastic disturbances.
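The two-stage idea can be sketched on simulated data: lasso selects among many candidate instruments in the first stage, the first stage is refit by OLS on the selected instruments (post-lasso), and the fitted values are used in the second stage. This is a hedged illustration under assumed data and tuning choices, not the paper's estimator or its data-driven penalty.

```python
# Hedged sketch: lasso-selected instruments, then simple 2SLS via fitted values.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p_z = 400, 100
Z = rng.standard_normal((n, p_z))                # many candidate instruments
v = rng.standard_normal(n)
d = Z[:, :5] @ np.full(5, 0.8) + v               # endogenous regressor
y = 1.5 * d + 0.7 * v + rng.standard_normal(n)   # structural equation (true effect 1.5)

# First stage: lasso picks relevant instruments, then OLS refit (post-lasso).
sel = np.flatnonzero(LassoCV(cv=5).fit(Z, d).coef_)
d_hat = LinearRegression().fit(Z[:, sel], d).predict(Z[:, sel])

# Second stage: regress y on the fitted first-stage values.
beta_iv = LinearRegression().fit(d_hat.reshape(-1, 1), y).coef_[0]
print(f"post-lasso IV estimate: {beta_iv:.3f}  (true effect is 1.5 in this simulation)")
```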
SUPPORT UNION RECOVERY IN HIGH-DIMENSIONAL MULTIVARIATE REGRESSION
In multivariate regression, a K-dimensional response vector is regressed upon a common set of p covariates, with a matrix B* ∈ ℝ^{p×K} of regression coefficients. We study the behavior of the multivariate group Lasso, in which block regularization based on the 𝓁₁/𝓁₂ norm is used for support union recovery, or recovery of the set of s rows for which B* is nonzero. Under high-dimensional scaling, we show that the multivariate group Lasso exhibits a threshold for the recovery of the exact row pattern with high probability over the random design and noise that is specified by the sample complexity parameter θ(n, p, s) := n/[2ψ(B*) log(p − s)]. Here n is the sample size, and ψ(B*) is a sparsity-overlap function measuring a combination of the sparsities and overlaps of the K regression coefficient vectors that constitute the model. We prove that the multivariate group Lasso succeeds for problem sequences (n, p, s) such that θ(n, p, s) exceeds a critical level θ_u, and fails for sequences such that θ(n, p, s) lies below a critical level θ_𝓁. For the special case of the standard Gaussian ensemble, we show that θ_𝓁 = θ_u, so that the characterization is sharp. The sparsity-overlap function ψ(B*) reveals that, if the design is uncorrelated on the active rows, 𝓁₁/𝓁₂ regularization for multivariate regression never harms performance relative to an ordinary Lasso approach and can yield substantial improvements in sample complexity (up to a factor of K) when the coefficient vectors are suitably orthogonal. For more general designs, it is possible for the ordinary Lasso to outperform the multivariate group Lasso. We complement our analysis with simulations that demonstrate the sharpness of our theoretical results, even for relatively small problems.
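The row-sparse 𝓁₁/𝓁₂ estimator discussed in this abstract is available in scikit-learn as MultiTaskLasso, which penalizes whole rows of the coefficient matrix so that all K responses share a common support. The snippet below is a small simulated sketch; dimensions and the penalty value are illustrative assumptions.

```python
# Sketch: support union recovery with an l1/l2 (row-wise) penalty on simulated data.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, K = 200, 50, 3
X = rng.standard_normal((n, p))
B_true = np.zeros((p, K))
B_true[:5, :] = rng.standard_normal((5, K))       # only the first 5 rows are active
Y = X @ B_true + 0.1 * rng.standard_normal((n, K))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)       # coef_ has shape (K, p)
active_rows = np.flatnonzero(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)
print("recovered active rows:", active_rows)      # ideally rows 0-4
```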
Least squares after model selection in high-dimensional sparse models
In this article we study post-model selection estimators that apply ordinary least squares (OLS) to the model selected by first-step penalized estimators, typically Lasso. It is well known that Lasso can estimate the nonparametric regression function at nearly the oracle rate, and is thus hard to improve upon. We show that the OLS post-Lasso estimator performs at least as well as Lasso in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the Lasso-based model selection "fails" in the sense of missing some components of the "true" regression model. By the "true" model, we mean the best s-dimensional approximation to the nonparametric regression function chosen by the oracle. Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso, in the sense of a strictly faster rate of convergence, if the Lasso-based model selection correctly includes all components of the "true" model as a subset and also achieves sufficient sparsity. In the extreme case, when Lasso perfectly selects the "true" model, the OLS post-Lasso estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by Lasso, which guarantees that this dimension is at most of the same order as the dimension of the "true" model. Our rate results are nonasymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the Lasso estimator acting as a selector in the first step, but also applies to any other estimator, for example, various forms of thresholded Lasso, with good rates and good sparsity properties. Our analysis covers both traditional thresholding and a new practical, data-driven thresholding scheme that induces additional sparsity subject to maintaining a certain goodness of fit. The latter scheme has theoretical guarantees similar to those of Lasso or OLS post-Lasso, but it dominates those procedures as well as traditional thresholding in a wide variety of experiments.
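The two-step estimator described here is straightforward to sketch: lasso selects a support, then ordinary least squares is refit on the selected columns, which removes the lasso's shrinkage bias on the retained coefficients. The data, sizes, and penalty below are illustrative assumptions.

```python
# Sketch of OLS post-Lasso on simulated data: select with lasso, refit with OLS.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 150, 300
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = [3.0, -2.0, 1.5, 1.0]                              # sparse true model
y = X @ beta + rng.standard_normal(n)

support = np.flatnonzero(Lasso(alpha=0.2).fit(X, y).coef_)    # step 1: selection
ols = LinearRegression().fit(X[:, support], y)                # step 2: OLS refit
print("selected columns:", support)
print("post-lasso OLS coefficients:", np.round(ols.coef_, 2))
```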
A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers
High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of low-dimensional structure, including sparse vectors, sparse and structured matrices, low-rank matrices and combinations thereof. In such settings, a general approach to estimation is to solve a regularized optimization problem, which combines a loss function measuring how well the model fits the data with some regularization function that encourages the assumed structure. This paper provides a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive some existing results, and also to obtain a number of new results on consistency and convergence rates, in both ℓ₂-error and related norms. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure corresponding regularized M-estimators have fast convergence rates and which are optimal in many well-studied cases.
DEGREES OF FREEDOM IN LASSO PROBLEMS
We derive the degrees of freedom of the lasso fit, placing no assumptions on the predictor matrix X. Like the well-known result of Zou, Hastie and Tibshirani [Ann. Statist. 35 (2007) 2173-2192], which gives the degrees of freedom of the lasso fit when X has full column rank, we express our result in terms of the active set of a lasso solution. We extend this result to cover the degrees of freedom of the generalized lasso fit for an arbitrary predictor matrix X (and an arbitrary penalty matrix D). Though our focus is degrees of freedom, we establish some intermediate results on the lasso and generalized lasso that may be interesting on their own.
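A tiny numerical sketch of the classical result this abstract builds on (Zou, Hastie and Tibshirani): when X has full column rank, the number of nonzero lasso coefficients is an unbiased estimate of the lasso fit's degrees of freedom. The simulated data and penalty value below are illustrative assumptions; the paper's extension to rank-deficient X and the generalized lasso is not shown.

```python
# Sketch: estimate the lasso fit's degrees of freedom by the size of the active set.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20                                   # n > p, so X has full column rank
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.0, 1.0]) + rng.standard_normal(n)

fit = Lasso(alpha=0.1).fit(X, y)
df_hat = int(np.count_nonzero(fit.coef_))        # size of the active set
print("estimated degrees of freedom:", df_hat)
```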