Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
3,782 results for "Probabilistic inference"
Exploring sample preparation and data evaluation strategies for enhanced identification of host cell proteins in drug products of therapeutic antibodies and Fc-fusion proteins
by Esser-Skala, Wolfgang; Segl, Marius; Holzmann, Johann
in Bevacizumab; Comparative analysis; Data acquisition
2020
Manufacturing of biopharmaceuticals involves recombinant protein expression in host cells followed by extensive purification of the target protein. Yet, host cell proteins (HCPs) may persist in the final drug product, potentially reducing its quality with respect to safety and efficacy. Consequently, residual HCPs are closely monitored during downstream processing by techniques such as enzyme-linked immunosorbent assay (ELISA) or high-performance liquid chromatography combined with tandem mass spectrometry (HPLC-MS/MS). The latter is especially attractive as it provides information with respect to protein identities. Although the applied HPLC-MS/MS methodologies are frequently optimized with respect to HCP identification, acquired data is typically analyzed using standard settings. Here, we describe an improved strategy for evaluating HPLC-MS/MS data of HCP-derived peptides, involving probabilistic protein inference and peptide detection in the absence of fragment ion spectra. This data analysis workflow was applied to data obtained for drug products of various biotherapeutics upon protein A affinity depletion. The presented data evaluation strategy enabled in-depth comparative analysis of the HCP repertoires identified in drug products of the monoclonal antibodies rituximab and bevacizumab, as well as the fusion protein etanercept. In contrast to commonly applied ELISA strategies, the here presented workflow is process-independent and may be implemented into existing HPLC-MS/MS setups for drug product characterization and process development.
Journal Article
Efficient hybrid rumor mitigation in dynamic and multilayer online social networks
2024
The proliferation of malicious information, including fake news and rumors, within Online Social Networks (OSNs) has prompted considerable research into strategies that mitigate the adverse effects of such content. This study focuses on the problem of minimizing rumor influence in dynamic, multilayer OSNs. Given the rapid evolution of OSNs and their expanding functionalities, we introduce an innovative OSN representation as a dynamic multilayer network, incorporating heterogeneous propagation models across layers to effectively capture the complex structure of OSNs. To address this challenge, we propose a hybrid approach to Rumor Influence Minimization that integrates two strategies: the Node or Link Blocking Strategy (BNLS) and the Truth Campaign Strategy (TCS). Within a probabilistic framework grounded in network inference, this integration identifies two optimal node sets, K+ and K-, to limit rumor spread and support truth campaigns, respectively. This selection is made under the constraint |K+|+|K-|≤K, where K is a predefined budget. By leveraging the strengths of both strategies, our approach minimizes rumor influence effectively. Our method presents several advantages: it captures (1) the dynamic and multilayered representation of OSNs, (2) the evolving structural properties of networks, and (3) the temporal aspects of rumor propagation. To implement this solution, we develop the Hybrid Greedy Algorithm (HGA), which provides a (1-1/e)-approximation guarantee. Systematic experiments on both synthetic and real-world datasets across single and multilayer networks demonstrate the superior performance of our approach. The results indicate that our hybrid strategy outperforms recent state-of-the-art methods, validating its effectiveness for rumor influence minimization.
Journal Article
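The abstract above describes a greedy algorithm with a (1-1/e)-approximation guarantee under a node budget K. A minimal stdlib sketch of that selection pattern, assuming a toy monotone coverage function in place of the paper's dynamic multilayer propagation model (the candidate nodes and their reachable sets below are invented for illustration):

```python
# Greedy budgeted selection: repeatedly pick the node with the largest
# marginal gain in coverage until the budget K is exhausted. For
# monotone submodular objectives this classic loop carries the
# (1 - 1/e) guarantee the HGA abstract cites.

def greedy_select(candidates, covered_by, K):
    """Pick up to K candidates maximizing total coverage."""
    chosen, covered = [], set()
    for _ in range(K):
        best, best_gain = None, 0
        for c in candidates:
            if c in chosen:
                continue
            gain = len(covered_by[c] - covered)   # marginal gain
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:                          # no further gain possible
            break
        chosen.append(best)
        covered |= covered_by[best]
    return chosen, covered

# Hypothetical "users reached by the rumor through node X"
covered_by = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5},
    "d": {1, 2},
}
chosen, covered = greedy_select(list(covered_by), covered_by, K=2)
```

With budget K=2 the loop first takes "a" (gain 3), then "b" (marginal gain 1, since node 3 is already covered), illustrating why marginal rather than absolute gain drives the choice.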
A probabilistic inference model for recommender systems
by Zhu, Kunlei; Huang, Jiajin; Zhong, Ning
in Artificial Intelligence; Belief networks; Collaboration
2016
Recommendation is an important application that is employed on the Web. In this paper, we propose a method for recommending items to a user by extending a probabilistic inference model in information retrieval. We regard the user’s preference as the query, an item as a document, and explicit and implicit factors as index terms. Additional information sources can be added to the probabilistic inference model, particularly belief networks. The proposed method also uses the belief network model to recommend items by combining expert information. Experimental results on real-world data sets show that the proposed method can improve recommendation effectiveness.
Journal Article
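The recommender abstract above casts recommendation as probabilistic retrieval: the user's preference acts as the query, items as documents, and factors as index terms. A hypothetical, heavily simplified sketch of that scoring idea, assuming naive term independence and made-up weights rather than the paper's belief-network formulation:

```python
# Score an item by summing, over index terms shared with the user
# profile, the product of the item's term weight and the user's term
# weight -- a toy stand-in for P(user preference | item) in the
# probabilistic-retrieval view of recommendation.

def score(item_terms, user_terms):
    """Relevance of an item to a user under naive term independence."""
    return sum(w * user_terms[t] for t, w in item_terms.items() if t in user_terms)

user = {"sci-fi": 0.7, "drama": 0.2}            # weight of each term for the user
items = {
    "film_A": {"sci-fi": 0.9, "action": 0.5},   # weight of each term for the item
    "film_B": {"drama": 0.8},
}
ranked = sorted(items, key=lambda i: score(items[i], user), reverse=True)
```

Here film_A scores 0.9 × 0.7 = 0.63 against film_B's 0.16, so it ranks first; the paper's belief networks additionally let expert information be combined into the same inference.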
Probabilistic programming in Python using PyMC3
by Salvatier, John; Fonnesbeck, Christopher; Wiecki, Thomas V.
in Advantages; Algorithms; Bayesian analysis
2016
Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source probabilistic programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other probabilistic programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
Journal Article
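The PyMC3 abstract above concerns automated MCMC-based Bayesian inference. As a stdlib-only stand-in (PyMC3 itself specifies models in Python and samples with gradient-based Hamiltonian Monte Carlo via Theano), here is a minimal random-walk Metropolis sampler for the posterior of a Gaussian mean; the data and tuning constants are illustrative:

```python
import random, math

def log_posterior(mu, data, sigma=1.0):
    # Flat prior on mu; Gaussian likelihood with known sigma
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, steps=5000, step_size=0.5, seed=0):
    """Random-walk Metropolis: propose, then accept with the MH ratio."""
    rng = random.Random(seed)
    mu, samples = 0.0, []
    for _ in range(steps):
        prop = mu + rng.gauss(0, step_size)
        # Accept if log-uniform draw falls below the log acceptance ratio
        if math.log(rng.random()) < log_posterior(prop, data) - log_posterior(mu, data):
            mu = prop
        samples.append(mu)
    return samples

data = [2.1, 1.9, 2.3, 2.0, 1.8]
samples = metropolis(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

With a flat prior the posterior mean should sit near the data mean (2.02); PyMC3 automates exactly this kind of loop while using gradients to propose far more efficiently.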
Inference for Nonprobability Samples
by Elliott, Michael R.; Valliant, Richard
in Data management; Incentives; Probabilistic inference
2017
Although selecting a probability sample has been the standard for decades when making inferences from a sample to a finite population, incentives are increasing to use nonprobability samples. In a world of "big data", large amounts of data are available that are faster and easier to collect than are probability samples. Design-based inference, in which the distribution for inference is generated by the random mechanism used by the sampler, cannot be used for nonprobability samples. One alternative is quasi-randomization in which pseudo-inclusion probabilities are estimated based on covariates available for samples and nonsample units. Another is superpopulation modeling for the analytic variables collected on the sample units in which the model is used to predict values for the nonsample units. We discuss the pros and cons of each approach.
Journal Article
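The abstract above mentions quasi-randomization, where each nonprobability-sample unit gets an estimated pseudo-inclusion probability and is weighted by its inverse. A hypothetical sketch with a single covariate; the propensities below are assumed for illustration, not estimated from a real reference sample:

```python
# Inverse pseudo-inclusion-probability weighting: units from covariate
# classes that are underrepresented in the nonprobability sample (here,
# "old") receive larger weights, pulling the estimate toward the
# population they stand in for.

pseudo_inclusion = {"young": 0.40, "old": 0.10}   # assumed P(in sample | covariate)

sample = [                                         # (age group, outcome y)
    ("young", 10), ("young", 12), ("old", 30), ("young", 11),
]

weights = [1 / pseudo_inclusion[g] for g, _ in sample]
weighted_mean = sum(w * y for w, (_, y) in zip(weights, sample)) / sum(weights)
unweighted_mean = sum(y for _, y in sample) / len(sample)
```

The single "old" respondent carries weight 10 against 2.5 for each "young" one, so the weighted mean (≈21.9) sits well above the naive sample mean (15.75), which is the correction quasi-randomization is after.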
Correction: The Sapir-Whorf Hypothesis and Probabilistic Inference: Evidence from the Domain of Color
2016
[This corrects the article DOI: 10.1371/journal.pone.0158725.]
Journal Article
Deep generative modeling for single-cell transcriptomics
by Jordan, Michael I; Cole, Michael B; Yosef, Nir
in Artificial neural networks; Biodiversity; Clustering
2018
Single-cell transcriptome measurements can reveal unexplored biological diversity, but they suffer from technical noise and bias that must be modeled to account for the resulting uncertainty in downstream analyses. Here we introduce single-cell variational inference (scVI), a ready-to-use scalable framework for the probabilistic representation and analysis of gene expression in single cells (https://github.com/YosefLab/scVI). scVI uses stochastic optimization and deep neural networks to aggregate information across similar cells and genes and to approximate the distributions that underlie observed expression values, while accounting for batch effects and limited sensitivity. We used scVI for a range of fundamental analysis tasks including batch correction, visualization, clustering, and differential expression, and achieved high accuracy for each task.
Journal Article
A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs
by Laan, Christopher; George, Dileep; Lázaro-Gredilla, Miguel
in Algorithms; Artificial intelligence; Artificial neural networks
2017
Proving that we are human is now part of many tasks that we do on the internet, such as creating an email account, voting in an online poll, or even downloading a scientific paper. One of the most popular tests is text-based CAPTCHA, where would-be users are asked to decipher letters that may be distorted, partially obscured, or shown against a busy background. This test is used because computers find it tricky, but (most) humans do not. George et al. developed a hierarchical model for computer vision that was able to solve CAPTCHAs with a high accuracy rate using comparatively little training data. The results suggest that moving away from text-based CAPTCHAs, as some online services have done, may be a good idea. Science, this issue p. eaag2612. Learning from a few examples and generalizing to markedly different situations are capabilities of human visual intelligence that are yet to be matched by leading machine learning models. By drawing inspiration from systems neuroscience, we introduce a probabilistic generative model for vision in which message-passing–based inference handles recognition, segmentation, and reasoning in a unified way. The model demonstrates excellent generalization and occlusion-reasoning capabilities and outperforms deep neural networks on a challenging scene text recognition benchmark while being 300-fold more data efficient. In addition, the model fundamentally breaks the defense of modern text-based CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) by generatively segmenting characters without CAPTCHA-specific heuristics. Our model emphasizes aspects such as data efficiency and compositionality that may be important in the path toward general artificial intelligence.
Journal Article
CMOS plus stochastic nanomagnets enabling heterogeneous computers for probabilistic inference and learning
by Kobayashi, Keito; Aadit, Navid Anjum; Camsari, Kerem Y.
in 639/705/1042; 639/925/927/1062; Algorithms
2024
Extending Moore’s law by augmenting complementary-metal-oxide semiconductor (CMOS) transistors with emerging nanotechnologies (X) has become increasingly important. One important class of problems involve sampling-based Monte Carlo algorithms used in probabilistic machine learning, optimization, and quantum simulation. Here, we combine stochastic magnetic tunnel junction (sMTJ)-based probabilistic bits (p-bits) with Field Programmable Gate Arrays (FPGA) to create an energy-efficient CMOS + X (X = sMTJ) prototype. This setup shows how asynchronously driven CMOS circuits controlled by sMTJs can perform probabilistic inference and learning by leveraging the algorithmic update-order-invariance of Gibbs sampling. We show how the stochasticity of sMTJs can augment low-quality random number generators (RNG). Detailed transistor-level comparisons reveal that sMTJ-based p-bits can replace up to 10,000 CMOS transistors while dissipating two orders of magnitude less energy. Integrated versions of our approach can advance probabilistic computing involving deep Boltzmann machines and other energy-based learning algorithms with extremely high throughput and energy efficiency.
Designing energy-efficient and scalable hardware capable of accelerating Monte Carlo algorithms is highly desirable for probabilistic computing. Here, Singh et al. combine stochastic magnetic tunnel junction-based probabilistic bits with versatile field programmable gate arrays to achieve this goal.
Journal Article
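The p-bit entry above leverages the update-order-invariance of Gibbs sampling on networks of stochastic bits. A stdlib sketch of the standard p-bit update rule, where each bit settles to ±1 with a probability set by the tanh of its local field; the two-bit ferromagnetic network below is a toy model, not the sMTJ hardware:

```python
import random, math

def pbit_update(i, m, J, h, beta, rng):
    """Gibbs update of bit i: P(m_i = +1) = (1 + tanh(beta * field)) / 2."""
    field = h[i] + sum(J[i][j] * m[j] for j in range(len(m)))
    m[i] = 1 if rng.random() < (1 + math.tanh(beta * field)) / 2 else -1

rng = random.Random(42)
J = [[0.0, 1.0], [1.0, 0.0]]   # ferromagnetic coupling: the bits prefer to agree
h = [0.0, 0.0]                 # no external bias
m = [1, -1]                    # start disagreeing
agree = 0
for step in range(10000):
    pbit_update(step % 2, m, J, h, beta=1.0, rng=rng)   # sequential Gibbs sweeps
    agree += (m[0] == m[1])
```

With coupling J = 1 and beta = 1, Boltzmann statistics put the agreement fraction near e²/(e² + 1) ≈ 0.88, which the sampler should approach; in the CMOS + sMTJ prototype the device's intrinsic noise plays the role of `rng`.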
Learning action-oriented models through active inference
by Seth, Anil K.; Tschantz, Alexander; Buckley, Christopher L.
in Algorithms; Bacterial Physiological Phenomena; Behavior
2020
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad-bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment.
Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample efficient learning algorithms.
Journal Article