8,302 results for "Kernel method"
Deep Kernel for Genomic and Near Infrared Predictions in Multi-environment Breeding Trials
Kernel methods are flexible and easy to interpret and have been successfully used in genomic-enabled prediction of various plant species. Kernel methods used in genomic prediction comprise the linear genomic best linear unbiased predictor (GBLUP or GB) kernel and the Gaussian kernel (GK). In general, these kernels have been used with two statistical models: single-environment and genomic × environment (GE) models. Recently, near-infrared spectroscopy (NIR) has been used as an inexpensive and non-destructive high-throughput phenotyping method for predicting unobserved line performance in plant breeding trials. In this study, we used a non-linear arc-cosine kernel (AK) that emulates deep learning artificial neural networks. We compared the prediction accuracy of AK with that of the GB and GK kernel methods in four genomic data sets, one of which also includes pedigree and NIR information. Results show that for all four data sets, the AK and GK kernels achieved higher prediction accuracy than the linear GB kernel for the single-environment and GE multi-environment models. In addition, AK achieved similar or slightly higher prediction accuracy than the GK kernel. For all data sets, the GE model achieved higher prediction accuracy than the single-environment model. For the data set that includes pedigree, markers and NIR, results show that the NIR wavelengths alone achieved lower prediction accuracy than the genomic information alone; however, the pedigree plus NIR information achieved only slightly lower prediction accuracy than the marker plus NIR high-throughput data.
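The three kernels compared in this abstract can be made concrete with a toy NumPy sketch, assuming a line-by-marker matrix `X`. The median-distance bandwidth for GK and the order-1 arc-cosine form for AK are illustrative choices, not taken from the paper:

```python
import numpy as np

def gb_kernel(X):
    """Linear GBLUP (GB) kernel: marker-based genomic relationship matrix."""
    return X @ X.T / X.shape[1]

def gk_kernel(X):
    """Gaussian kernel (GK); the median-distance bandwidth is an assumption."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0)
    h = np.median(d2[d2 > 0])  # bandwidth heuristic, for illustration only
    return np.exp(-d2 / h)

def ak_kernel(X):
    """Order-1 arc-cosine kernel (AK), which mimics a one-hidden-layer ReLU net."""
    norms = np.linalg.norm(X, axis=1)
    cos = np.clip(X @ X.T / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(norms, norms) * (np.sin(theta) + (np.pi - theta) * cos) / np.pi

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 50))  # 5 lines x 50 markers (toy data)
K_gb, K_gk, K_ak = gb_kernel(X), gk_kernel(X), ak_kernel(X)
```

Any of the three Gram matrices can then be plugged into the same mixed-model or kernel-regression machinery; only the similarity notion changes.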
JUST INTERPOLATE
In the absence of explicit regularization, Kernel “Ridgeless” Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions that is due to a combination of the high dimensionality of the input data, the curvature of the kernel function, and favorable geometric properties of the data, such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs on the MNIST dataset.
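The minimum-norm interpolant the abstract studies has a one-line form: with no ridge penalty, the coefficients are the kernel-matrix pseudo-inverse applied to the labels. A minimal sketch, where the Gaussian kernel and toy data are assumptions for illustration:

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    """Gaussian kernel Gram matrix between row sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def ridgeless_fit(K, y):
    """Minimum-norm interpolant: alpha = K^+ y, with no explicit regularization."""
    return np.linalg.pinv(K) @ y

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))
y = np.sin(X[:, 0])
K = gaussian_gram(X, X)
alpha = ridgeless_fit(K, y)
train_pred = K @ alpha  # interpolates the training labels (up to float precision)
```

The paper's point is that, despite this perfect fit, test predictions `gaussian_gram(X_new, X) @ alpha` can still generalize when the data geometry supplies implicit regularization.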
Analyzing three-dimensional wave propagation with the hybrid reproducing kernel particle method based on the dimension splitting method
By introducing the dimension splitting method into the reproducing kernel particle method (RKPM), a hybrid reproducing kernel particle method (HRKPM) for solving three-dimensional (3D) wave propagation problems is presented in this paper. Compared with the RKPM for 3D problems, the HRKPM only needs to solve a set of two-dimensional (2D) problems in some subdomains, rather than a 3D problem over the whole 3D domain. The shape functions of 2D problems are much simpler than those of 3D problems, so the HRKPM saves CPU time considerably. Four numerical examples are selected to verify the validity and advantages of the proposed method. In addition, the error analysis and convergence of the proposed method are investigated. The numerical results show that the HRKPM has higher computational efficiency than the RKPM and the element-free Galerkin method.
Rate-Distortion Bounds for Kernel-Based Distortion Measures
Kernel methods have been used for turning linear learning algorithms into nonlinear ones. These nonlinear algorithms measure distances between data points by the distance in the kernel-induced feature space. In lossy data compression, the optimal tradeoff between the number of quantized points and the incurred distortion is characterized by the rate-distortion function. However, the rate-distortion functions associated with distortion measures involving kernel feature mapping have yet to be analyzed. We consider two reconstruction schemes, reconstruction in input space and reconstruction in feature space, and provide bounds to the rate-distortion functions for these schemes. Comparison of the derived bounds to the quantizer performance obtained by the kernel K-means method suggests that the rate-distortion bounds for input space and feature space reconstructions are informative at low and high distortion levels, respectively.
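The feature-space distance this abstract relies on never requires the feature map explicitly; it follows from the kernel trick, ||φ(x) − φ(y)||² = k(x,x) − 2k(x,y) + k(y,y). A tiny sketch with an RBF kernel (an illustrative choice):

```python
import numpy as np

def feature_space_dist2(k, x, y):
    """Squared distance in the kernel-induced feature space via the kernel trick."""
    return k(x, x) - 2 * k(x, y) + k(y, y)

# RBF kernel: k(a, b) = exp(-||a - b||^2)
rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))

x = np.array([0.0, 0.0])
y = np.array([1.0, 0.0])
d2 = feature_space_dist2(rbf, x, y)  # = 2 - 2*exp(-1)
```

This identity is exactly what lets kernel K-means assign points to centroids in feature space without ever materializing φ.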
Two Implicit Meshless Finite Point Schemes for the Two-Dimensional Distributed-Order Fractional Equation
In this paper, the distributed-order time-fractional sub-diffusion equation on bounded domains is studied using a finite-point-type meshless method. The finite point method is a point-collocation-based method that is truly meshless and computationally efficient. To construct the shape functions of the finite point method, the moving least squares reproducing kernel approximation is employed. Two implicit discretisation schemes are derived. Stability and norm convergence of the obtained difference schemes are proved. Numerical examples are provided to confirm the theoretical results.
Decentralised one‐class kernel classification‐based damage detection and localisation
Summary In this paper, a data‐based damage detection algorithm that uses a novel one‐class kernel classifier for detection and localisation of damage is presented. The demands of wireless sensing are carefully considered in the development of this fully decentralised and automated methodology. The one‐class kernel classifier proposed in this paper is trained through a faster and simpler to implement iterative procedure than other kernel classification methods, while retaining the same advantages over parametric methods, making it especially attractive for embedded damage detection. Acceleration time series at each sensor location are processed into autoregressive and continuous wavelet transform‐based damage‐sensitive features. Baseline values of these features are used to train the classifier, which can then classify features from new tests as damaged or undamaged, as well as outputting a localisation index, which can be used to identify the location of damage in the structure. This methodology is evaluated using acceleration data taken from a steel‐frame laboratory structure under various damage scenarios. A number of parametric studies are also conducted to investigate the effect of sampling frequency and baseline data sample size. Copyright © 2016 John Wiley & Sons, Ltd.
Range and Habitat Selection of African Buffalo in South Africa
We used more than 10 years of data on buffalo herds in a Geographic Information System (GIS) of Klaserie Private Nature Reserve (KPNR) to examine ranging behavior and habitat selection at multiple temporal and geographic scales. We compared 3 methods of empirical home range estimation: minimum convex polygons (MCP); a fixed-kernel method; and a new local nearest-neighbor convex-hull construction method (LoCoH). For 3 herds over 5 years (1995–2000), the southern herd (SH) had the largest range, the focal study herd (FH) had the intermediate range, and the northern herd (NH) had the smallest range. The LoCoH method best described the ranges because it accommodated user knowledge of known physical barriers, such as fences, whereas the MCP and kernel methods overestimated ranges. Short-term ranges of the FH over 9 years reveal that buffalo travel farther and range wider in the dry season than in the wet season. Habitat selection analyses on broad vegetation categories showed preference for Acacia shrub veld and Combretum-dominated woodlands. We found no significant selection of habitat at a fine geographic and temporal interval using the remotely sensed normalized difference vegetation index (NDVI), but the index was correlated to ranging behavior at a larger geographic scale. We found that buffalo selected areas within 1 km of water sources, and an isopleth analysis using the new LoCoH method showed preference for riverine areas in both seasons. This suggests that buffalo preferentially select for areas near water, but they may range farther in the dry season for higher-quality food. As KPNR has a higher density of water than the neighboring Kruger National Park (KNP), this study provides a comparison of buffalo response to water availability in a smaller reserve and important information to managing the buffalo population as part of the larger Greater Kruger Management Area (GKMA).
A Hybrid Reproducing Kernel Particle Method for Three-Dimensional Helmholtz Equation
The reproducing kernel particle method (RKPM) is one of the most universal meshless methods. However, when solving three-dimensional (3D) problems, the computational efficiency is relatively low because of the complexity of the shape function. To overcome this disadvantage, in this study, we introduced the dimension splitting method into the RKPM to present a hybrid reproducing kernel particle method (HRKPM), and the 3D Helmholtz equation is solved. The 3D Helmholtz equation is transformed into a series of related two-dimensional (2D) ones, in which the 2D RKPM shape function is used, and the Galerkin weak form of these 2D problems is applied to obtain the discretized equations. In the dimension-splitting direction, the difference method is used to combine the discretized equations in all 2D domains. Three example problems are given to illustrate the performance of the HRKPM. Moreover, the numerical results show that the HRKPM can improve the computational efficiency of the RKPM significantly.
On the Effect of Bias Estimation on Coverage Accuracy in Nonparametric Inference
Nonparametric methods play a central role in modern empirical work. While they provide inference procedures that are more robust to parametric misspecification bias, they may be quite sensitive to tuning parameter choices. We study the effects of bias correction on confidence interval coverage in the context of kernel density and local polynomial regression estimation, and prove that bias correction can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice. This is achieved using a novel, yet simple, Studentization, which leads to a new way of constructing kernel-based bias-corrected confidence intervals. In addition, for practical cases, we derive coverage error optimal bandwidths and discuss easy-to-implement bandwidth selectors. For interior points, we show that the mean-squared error (MSE)-optimal bandwidth for the original point estimator (before bias correction) delivers the fastest coverage error decay rate after bias correction when second-order (equivalent) kernels are employed, but is otherwise suboptimal because it is too "large." Finally, for odd-degree local polynomial regression, we show that, as with point estimation, coverage error adapts to boundary points automatically when appropriate Studentization is used; however, the MSE-optimal bandwidth for the original point estimator is suboptimal. All the results are established using valid Edgeworth expansions and illustrated with simulated data. Our findings have important consequences for empirical work as they indicate that bias-corrected confidence intervals, coupled with appropriate standard errors, have smaller coverage error and are less sensitive to tuning parameter choices in practically relevant cases where additional smoothness is available. Supplementary materials for this article are available online.
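The tuning parameter at the center of this abstract is the bandwidth of a kernel estimator. A plain Gaussian kernel density estimator makes it concrete; the rule-of-thumb bandwidth below is the classic Silverman choice, used only for illustration (the paper's bias-corrected intervals are not implemented here):

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate at points x_eval with bandwidth h."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
data = rng.standard_normal(500)
h = 1.06 * data.std() * len(data) ** (-0.2)  # Silverman rule-of-thumb bandwidth
grid = np.linspace(-6, 6, 241)
f_hat = kde(grid, data, h)
```

The paper's question is how to build confidence intervals around `f_hat`: undersmoothing shrinks `h` below the MSE-optimal rate, whereas the authors' approach keeps a bias-corrected estimate with a matching Studentization.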
RANDOMIZED SKETCHES FOR KERNELS: FAST AND OPTIMAL NONPARAMETRIC REGRESSION
Kernel ridge regression (KRR) is a standard method for performing nonparametric regression over reproducing kernel Hilbert spaces. Given n samples, the time and space complexity of computing the KRR estimate scale as 𝓞(n³) and 𝓞(n²), respectively, which is prohibitive in many cases. We propose approximations of KRR based on m-dimensional randomized sketches of the kernel matrix, and study how small the projection dimension m can be chosen while still preserving minimax optimality of the approximate KRR estimate. For various classes of randomized sketches, including those based on Gaussian and randomized Hadamard matrices, we prove that it suffices to choose the sketch dimension m proportional to the statistical dimension (modulo logarithmic factors). Thus, we obtain fast and minimax optimal approximations to the KRR estimate for nonparametric regression. In doing so, we prove a novel lower bound on the minimax risk of kernel regression in terms of the localized Rademacher complexity.
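The sketching idea can be sketched in a few lines: restrict the KRR coefficients to the row space of an m × n random matrix S, so only an m × m system is solved. This toy version loosely follows the paper's Gaussian-sketch construction; the kernel width, λ, sketch dimension, and data are all illustrative:

```python
import numpy as np

def sketched_krr(K, y, m, lam, rng):
    """Approximate KRR by restricting alpha = S.T @ w for an m x n Gaussian
    sketch S, then solving the small m x m normal-equation system
    (S K K S' + lam * S K S') w = S K y."""
    n = len(y)
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    SK = S @ K
    w = np.linalg.solve(SK @ K @ S.T + lam * SK @ S.T, SK @ y)
    return S.T @ w  # n-dimensional coefficient vector alpha

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))
y = X[:, 0] + 0.1 * rng.standard_normal(100)
sq = np.sum(X**2, axis=1)
K = np.exp(-0.1 * np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0))
alpha = sketched_krr(K, y, m=40, lam=1e-2, rng=rng)
pred = K @ alpha
```

The paper's result is that m of order the statistical dimension (rather than n) already preserves the minimax rate, which is what makes this reduction worthwhile.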