61 result(s) for "Prior loss function"
Accurate prediction of drug-protein interactions by maintaining the original topological relationships among embeddings
Background Learning-based methods have recently demonstrated strong potential in predicting drug-protein interactions (DPIs). However, existing approaches often fail to achieve accurate predictions on real-world imbalanced datasets while maintaining high generalizability and scalability, limiting their practical applicability. Results This study proposes a highly generalized model, GLDPI, aimed at improving prediction accuracy in imbalanced scenarios by preserving the topological relationships among initial molecular representations in the embedding space. Specifically, GLDPI employs dedicated encoders to transform one-dimensional sequence information of drugs and proteins into embedding representations and efficiently calculates the likelihood of DPIs using cosine similarity. Additionally, we introduce a prior loss function based on the guilt-by-association principle to ensure that the topology of the embedding space aligns with the structure of the initial drug-protein network. This design enables GLDPI to effectively capture network relationships and key features of molecular interactions, thereby significantly enhancing predictive performance. Conclusions Experimental results highlight GLDPI’s superior performance on multiple highly imbalanced benchmark datasets, achieving over a 100% improvement in the AUPR metric compared to state-of-the-art methods. Additionally, GLDPI demonstrates exceptional generalization capabilities in cold-start experiments, excelling in predicting novel drug-protein interactions. Furthermore, the model exhibits remarkable scalability, efficiently inferring approximately 1.2 × 10^10 drug-protein pairs in less than 10 hours.
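The abstract describes two reusable ideas: scoring a drug-protein pair by the cosine similarity of their embeddings, and a guilt-by-association prior loss that aligns embedding-space similarities with the observed interaction network. A minimal sketch of both, assuming random stand-in embeddings, a [-1, 1] → [0, 1] rescaling of the similarity, and a squared-error form of the alignment term (GLDPI's exact formulation is not given in the abstract):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dpi_score(drug_emb, protein_emb):
    # Map similarity in [-1, 1] to an interaction likelihood in [0, 1]
    # (an assumed rescaling; the paper may use another link).
    return 0.5 * (cosine_similarity(drug_emb, protein_emb) + 1.0)

def prior_loss(drug_embs, protein_embs, adjacency):
    # Guilt-by-association prior (hypothetical squared-error form):
    # embedding-space similarities should match the drug-protein network.
    sims = np.array([[dpi_score(d, p) for p in protein_embs] for d in drug_embs])
    return float(np.mean((sims - adjacency) ** 2))

drug = np.array([0.2, 0.7, 0.1])
protein = np.array([0.3, 0.6, 0.1])
print(dpi_score(drug, protein))  # ≈ 1.0 only for near-identical embeddings
```

Identical embeddings score ≈ 1.0 and orthogonal ones 0.5 under this rescaling, which is the property the prior loss exploits to keep interacting pairs close.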
Reliability Estimation of Weibull-Exponential Distribution via Bayesian Approach
Bayesian estimation is employed to estimate the reliability function of the Weibull-Exponential distribution using different priors. The Bayes estimators of the reliability function have been obtained under squared error, precautionary, and entropy loss functions.
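The three loss functions named here have standard Bayes estimators given posterior draws of a parameter θ: the posterior mean under squared-error loss, √E[θ²] under the precautionary loss L(d, θ) = (d − θ)²/d, and 1/E[1/θ] under the entropy loss L(d, θ) = d/θ − ln(d/θ) − 1. A Monte Carlo sketch, with a hypothetical Gamma posterior standing in for the Weibull-Exponential reliability posterior (which the abstract does not give in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior draws of a reliability parameter theta.
theta = rng.gamma(shape=4.0, scale=0.5, size=100_000)

sel = theta.mean()                          # squared-error loss -> posterior mean
precautionary = np.sqrt((theta**2).mean())  # L=(d-t)^2/d     -> sqrt(E[t^2])
entropy = 1.0 / (1.0 / theta).mean()        # L=d/t-ln(d/t)-1 -> 1/E[1/t]
print(sel, precautionary, entropy)
```

By Jensen's inequality the three estimates are always ordered: entropy ≤ posterior mean ≤ precautionary, so the precautionary estimator is deliberately conservative (it overestimates relative to the mean).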
Statistically Optimal Cue Integration During Human Spatial Navigation
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.’s original review. Evidence from most studies demonstrates optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
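The "statistically optimal" integration referred to here is the standard maximum-likelihood combination of independent Gaussian cues, which weights each cue by its inverse variance; the cue values below are illustrative, not from any study:

```python
def integrate_cues(mu1, var1, mu2, var2):
    # Maximum-likelihood integration of two independent Gaussian cues:
    # each cue is weighted by its inverse variance (its reliability).
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    w2 = 1 - w1
    mu = w1 * mu1 + w2 * mu2
    # The combined estimate is less variable than either cue alone.
    var = 1 / (1 / var1 + 1 / var2)
    return mu, var

# e.g. a reliable visual landmark vs. noisier path integration
print(integrate_cues(10.0, 1.0, 14.0, 4.0))  # ≈ (10.8, 0.8)
```

The hallmark prediction tested in these studies is the variance line: the integrated estimate (0.8 here) should be more precise than the best single cue (1.0).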
Decoupling Shrinkage and Selection in Bayesian Linear Models: A Posterior Summary Perspective
Selecting a subset of variables for linear models remains an active area of research. This article reviews many of the recent contributions to the Bayesian model selection and shrinkage prior literature. A posterior variable selection summary is proposed, which distills a full posterior distribution over regression coefficients into a sequence of sparse linear predictors.
Image inpainting based on fusion structure information and pixelwise attention
Image inpainting refers to restoring the damaged areas of an image using the remaining available information. In recent years, deep learning-based image inpainting has been extensively explored and has shown remarkable performance; among existing approaches, parallel prior embedding methods have the advantages of few network parameters and relatively low training difficulty. However, most methods use a single prior that is unable to provide sufficient guidance information, and hence cannot generate high-quality, realistic, and vivid images. Fusion labels are effective priors that can provide more meaningful guidance information for inpainting. Meanwhile, attention mechanisms can focus on effective features and establish long-range correlations, which helps refine texture details. Therefore, this paper proposes a parallel prior embedding image inpainting method based on fusion structure information (FSI) and pixelwise attention. An FSI module using color structure and edge information is designed to update structure features and image features alternately, pixelwise attention is utilized to refine image details, and a joint loss is applied to constrain model training. Extensive experiments are conducted on multiple public datasets, and the results show that the proposed method generally outperforms several compared methods in terms of quantitative metrics and qualitative analysis.
LPFFNet: Lightweight Prior Feature Fusion Network for SAR Ship Detection
SAR ship detection is of great significance in marine safety, fisheries management, and maritime traffic. Many deep learning-based ship detection methods have improved detection accuracy but have also increased complexity and computational cost. To address this issue, a lightweight prior feature fusion network (LPFFNet) is proposed to improve the performance of SAR ship detection. A perception lightweight backbone network (PLBNet) is designed to reduce model complexity, and a multi-channel feature enhancement module (MFEM) is introduced to enhance SAR ship localization capability. Moreover, a channel prior feature fusion network (CPFFNet) is designed to enhance the perception of ships of different sizes. Meanwhile, the residual channel focused attention module (RCFA) and the multi-kernel adaptive pooling local attention network (MKAP-LAN) are integrated to improve feature extraction capability. In addition, enhanced ghost convolution (EGConv) is used to generate more reliable gradient information. Finally, detection performance is improved by focusing on difficult samples through a smooth weighted focus loss function (SWF Loss). The experimental results verify the effectiveness of the proposed model.
The 3-component mixture of power distributions under Bayesian paradigm with application of life span of fatigue fracture
Mixture distributions are naturally more attractive than simple probability models for modeling the heterogeneous environments of processes in reliability analysis. The focus of this study is to develop Bayesian inference for the 3-component mixture of power distributions. Under symmetric and asymmetric loss functions, the Bayes estimators and posterior risks using priors are derived. The behavior of the Bayes estimators for various sample sizes and test termination times (the point in time after which the test is terminated) is examined in this article. To assess the performance of the Bayes estimators in terms of posterior risk, a Monte Carlo simulation along with a real-data study is presented.
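As a concrete illustration of the model class, a power-function density f(x) = a·x^(a−1)/b^a on (0, b) can be mixed with three weights; the weights, shape parameters, and unit support below are hypothetical, not the paper's fitted values:

```python
import numpy as np

def power_pdf(x, a, b=1.0):
    # Power-function density f(x) = a * x**(a-1) / b**a on (0, b).
    return a * x ** (a - 1) / b ** a

def mixture_pdf(x, weights, shapes):
    # 3-component mixture: weighted sum of component densities.
    return sum(w * power_pdf(x, a) for w, a in zip(weights, shapes))

# Midpoint-rule check that the mixture integrates to 1 on (0, 1).
xs = np.linspace(0.0, 1.0, 10_001)
mid = (xs[:-1] + xs[1:]) / 2
dx = xs[1] - xs[0]
total = float((mixture_pdf(mid, [0.5, 0.3, 0.2], [1.5, 2.0, 3.0]) * dx).sum())
print(total)  # ≈ 1.0: the mixture is a valid density
```

Because the weights sum to 1 and each component is a density, the mixture is automatically a valid density; the Bayesian task in the paper is estimating the weights and shapes from (possibly censored) data.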
On ranking climate factors affecting the living organisms based on paired comparison model, a Bayesian approach
The method of paired comparisons (PC) endeavors to rank treatments presented in pairs to panelists (or respondents, judges, jurists, etc.), who have to select the better one based on sensory evaluations. Situations may occur in which the panelists cannot discriminate between the treatments and declare a tie. In this study, an effort is made to extend the Weibull PC model to accommodate ties. The extended Weibull PC model is analyzed under the Bayesian paradigm. Four different loss functions are used under noninformative (Uniform and Jeffreys) priors. The posterior and marginal posterior distributions are derived. The posterior estimates, posterior risks, preference probabilities, posterior probabilities, and predictive probabilities are evaluated to determine the ranking of the ecological factors. The goodness of fit of the proposed model is assessed. The entire analysis is carried out using a real data set based on preferences for the ecological factors.
Learning with noisy labels via logit adjustment based on gradient prior method
Robust loss functions are crucial for training models with strong generalization capacity in the presence of noisy labels. The commonly used Cross Entropy (CE) loss function tends to overfit noisy labels, while symmetric losses that are robust to label noise are limited by their symmetry conditions. We analyze the gradient of CE and identify the main difficulty posed by label noise: the imbalance of gradient norms among samples. Inspired by long-tail learning, we propose a gradient prior (GP)-based logit adjustment method to mitigate the impact of label noise. This method makes full use of the gradients of samples to adjust the logits, enabling DNNs to effectively ignore noisy samples and instead focus more on learning hard samples. Experiments on benchmark datasets demonstrate that our method significantly improves the performance of CE and outperforms existing methods, especially in the case of symmetric noise. Experiments on the object detection dataset Pascal VOC further verify the plug-and-play nature and robustness of our method.
The Bayesian Inference of Pareto Models Based on Information Geometry
Bayesian methods have developed rapidly due to the important role of explicable causality in practical problems. We develop geometric approaches to Bayesian inference for Pareto models and give an application to the analysis of sea clutter. For the two-parameter Pareto model, we show the non-existence of an α-parallel prior in general, and hence adopt the Jeffreys prior for Bayesian inference. Taking geodesic distance as the loss function, an estimator in the sense of minimal mean geodesic distance is obtained. Meanwhile, by involving Al-Bayyati’s loss function, we obtain a new class of Bayesian estimators. In the simulation, for sea clutter, we adopt the Pareto model to acquire various types of parameter estimates and posterior prediction results. Simulation results show the advantages of the proposed Bayesian estimators and the posterior prediction.
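For intuition on the Jeffreys-prior step: with the Pareto scale x_m known, the Jeffreys prior π(α) ∝ 1/α for the shape is conjugate, giving a Gamma(n, T) posterior with T = Σ ln(xᵢ/x_m). The sketch below computes the squared-error Bayes estimate n/T on simulated data; the paper's geodesic-distance and Al-Bayyati estimators require the Fisher metric of the Pareto manifold and are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_m = 3.0, 1.0
# Pareto samples via inverse CDF: x = x_m * U**(-1/alpha), U ~ Uniform(0, 1].
x = x_m * (1.0 - rng.random(5000)) ** (-1.0 / alpha_true)

# Jeffreys prior pi(a) ∝ 1/a  ->  posterior is Gamma(n, rate=T),
# with T = sum(log(x / x_m)); its mean is the squared-error Bayes estimate.
n, T = len(x), float(np.log(x / x_m).sum())
posterior_mean = n / T
print(posterior_mean)  # close to alpha_true = 3.0
```

With 5000 samples the posterior is tight, so all reasonable loss functions give nearby estimates; the geodesic-distance loss matters most at small n, which is where the paper's comparison is interesting.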