Catalogue Search | MBRL
Explore the vast range of titles available.
25,156 result(s) for "Machine learning -- Mathematical models"
Learning and decision-making from rank data
The ubiquitous challenge of learning and decision-making from rank data arises in situations where intelligent systems collect preference and behavior data from humans, learn from the data, and then use the data to help humans make efficient, effective, and timely decisions. Often, such data are represented by rankings. This book surveys some recent progress toward addressing the challenge from the considerations of statistics, computation, and socio-economics. We will cover classical statistical models for rank data, including random utility models, distance-based models, and mixture models. We will discuss and compare classical and state-of-the-art algorithms, such as algorithms based on Minorize-Majorization (MM), Expectation-Maximization (EM), Generalized Method-of-Moments (GMM), rank breaking, and tensor decomposition. We will also introduce principled Bayesian preference elicitation frameworks for collecting rank data. Finally, we will examine socio-economic aspects of statistically desirable decision-making mechanisms, such as Bayesian estimators. This book can be useful in three ways: (1) for theoreticians in statistics and machine learning to better understand the considerations and caveats of learning from rank data, compared to learning from other types of data, especially cardinal data; (2) for practitioners to apply algorithms covered by the book for sampling, learning, and aggregation; and (3) as a textbook for graduate students or advanced undergraduate students to learn about the field. This book requires that the reader has basic knowledge in probability, statistics, and algorithms. Knowledge in social choice would also help but is not required.
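The random utility models surveyed in this abstract can be illustrated with a short sketch. The following example is hypothetical (the item names and utility weights are invented for illustration): it draws one ranking from a Plackett-Luce model, a classic random utility model in which items are chosen one position at a time with probability proportional to their positive utility weights.

```python
import random

def sample_ranking(weights, rng):
    """Draw one ranking from a Plackett-Luce model: items are picked
    sequentially with probability proportional to their utility weights."""
    items = list(weights)
    ranking = []
    while items:
        total = sum(weights[i] for i in items)
        r = rng.random() * total
        for idx, item in enumerate(items):
            r -= weights[item]
            if r <= 0:
                ranking.append(items.pop(idx))
                break
        else:
            # Guard against floating-point round-off: take the last item.
            ranking.append(items.pop())
    return ranking

# Hypothetical utilities: "a" is strongest, so it tends to be ranked first.
weights = {"a": 5.0, "b": 2.0, "c": 1.0}
ranking = sample_ranking(weights, random.Random(0))
```

Repeated draws from this sampler produce exactly the kind of rank data the book's estimation algorithms (MM, EM, GMM, rank breaking) take as input.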
Gaussian Processes for Machine Learning
by
Williams, Christopher K. I.
,
Rasmussen, Carl Edward
in
Artificial intelligence
,
Computer Science
,
Computing and Information Technology
2005, 2006
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
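As a rough illustration of the GP regression problem this book treats, the following sketch (function names, data, and hyperparameters are invented for illustration, not taken from the book) computes the GP posterior mean, K*ᵀ(K + σ²I)⁻¹y, with a squared-exponential kernel.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance k(x, x') = exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """GP regression posterior mean: K_*^T (K + sigma^2 I)^{-1} y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf(x_train, x_test)
    return K_star.T @ np.linalg.solve(K, y_train)

x_train = np.array([0.0, 1.0, 2.0])
y_train = np.sin(x_train)
mu = gp_posterior_mean(x_train, y_train, np.array([1.0]))
```

With small observation noise, the posterior mean passes close to the training targets; the book's treatment adds the posterior covariance, kernel choices, and Bayesian model selection on top of this basic computation.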
Computational trust models and machine learning
"This book provides an introduction to computational trust models from a machine learning perspective. After reviewing traditional computational trust models, it discusses a new trend of applying formerly unused machine learning methodologies, such as supervised learning. The application of various learning algorithms, such as linear regression, matrix decomposition, and decision trees, illustrates how to translate the trust modeling problem into a (supervised) learning problem. The book also shows how novel machine learning techniques can improve the accuracy of trust assessment compared to traditional approaches" -- Provided by publisher.
Machine learning : a Bayesian and optimization perspective
by
Theodoridis, Sergios
in
Bayesian statistical decision theory
,
Machine learning
,
Machine learning -- Mathematical models
2015
This book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth. Topics include all major classical techniques (mean/least-squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression, and boosting methods); latest trends; and case studies (protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, and channel equalization and echo cancellation) showing how the theory can be applied.
Advances in learning theory : methods, models and applications
by
NATO Advanced Study Institute on Learning Theory and Practice
,
Suykens, Johan A. K.
in
Computational learning theory
,
Congresses
,
Machine learning
2003
This text details advances in learning theory that relate to problems studied in neural networks, machine learning, mathematics and statistics.
Machine Learning
Machine Learning: A Bayesian and Optimization Perspective, Second Edition, gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches based on optimization techniques combined with the Bayesian inference approach. The book builds from the basic classical methods to recent trends, making it suitable for different courses, including pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models. In addition, sections cover major machine learning methods developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth and supported by examples and problems, making the book an invaluable resource for both students and researchers in understanding and applying machine learning concepts. This updated edition includes many more simple examples on basic theory, complete rewrites of the chapter on Neural Networks and Deep Learning, and expanded treatment of Bayesian learning, including Nonparametric Bayesian Learning.
Mastering TensorFlow 1.x
We cover advanced deep learning concepts (such as transfer learning, generative adversarial models, and reinforcement learning), and implement them using TensorFlow and Keras. We cover how to build and deploy at scale with distributed models. You will learn to build TensorFlow models using R, Keras, TensorFlow Learn, TensorFlow Slim and Sonnet.
Dataset Shift in Machine Learning
2008, 2009
Dataset shift is a common problem in predictive modeling that occurs when the joint distribution of inputs and outputs differs between training and test stages. Covariate shift, a particular case of dataset shift, occurs when only the input distribution changes. Dataset shift is present in most practical applications, for reasons ranging from the bias introduced by experimental design to the irreproducibility of the testing conditions at training time. (An example is email spam filtering, which may fail to recognize spam that differs in form from the spam the automatic filter has been built on.) Despite this, and despite the attention given to the apparently similar problems of semi-supervised learning and active learning, dataset shift has received relatively little attention in the machine learning community until recently. This volume offers an overview of current efforts to deal with dataset and covariate shift. The chapters offer a mathematical and philosophical introduction to the problem, place dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning, provide theoretical views of dataset and covariate shift (including decision theoretic and Bayesian perspectives), and present algorithms for covariate shift. Contributors: Shai Ben-David, Steffen Bickel, Karsten Borgwardt, Michael Brückner, David Corfield, Amir Globerson, Arthur Gretton, Lars Kai Hansen, Matthias Hein, Jiayuan Huang, Choon Hui Teo, Takafumi Kanamori, Klaus-Robert Müller, Sam Roweis, Neil Rubens, Tobias Scheffer, Marcel Schmittfull, Bernhard Schölkopf, Hidetoshi Shimodaira, Alex Smola, Amos Storkey, Masashi Sugiyama
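A standard correction for the covariate shift described above is importance weighting: training examples are reweighted by p_test(x)/p_train(x) so that the training loss estimates the test-distribution risk. The following is a minimal sketch under invented assumptions (1-D Gaussian input densities known in closed form, a linear relationship p(y|x) shared by both stages), not an algorithm taken from this volume.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mean):
    """Density of N(mean, 1)."""
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2 * np.pi)

# Training inputs drawn from N(0, 1); test inputs would come from N(1, 1).
# The conditional p(y | x) (here y = 2x + noise) is identical in both
# stages, which is precisely the covariate-shift setting.
x_train = rng.normal(0.0, 1.0, 500)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 500)

# Importance weights w(x) = p_test(x) / p_train(x) reweight the training
# loss so that it estimates the risk under the test input distribution.
w = normal_pdf(x_train, 1.0) / normal_pdf(x_train, 0.0)

# Importance-weighted least squares for the slope:
# argmin_a  sum_i w_i * (y_i - a * x_i)^2
slope = np.sum(w * x_train * y_train) / np.sum(w * x_train ** 2)
```

In practice the density ratio is rarely known and must itself be estimated, which is one of the problems the chapters of this volume address.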
Optimization for Machine Learning
by
Wright, Stephen J.
,
Nowozin, Sebastian
,
Sra, Suvrit
in
Artificial Intelligence
,
Computer Science
,
Machine learning
2012
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
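One of the proximal methods and regularized-optimization themes mentioned in this description can be made concrete with a short sketch (this is a generic textbook illustration, not code from the book): proximal-gradient descent (ISTA) for L1-regularized least squares, where the proximal operator of the L1 norm is soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, step=None, iters=200):
    """Proximal gradient (ISTA) for min_x 0.5 ||Ax - b||^2 + lam ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

A = np.eye(3)
b = np.array([1.0, 0.05, -2.0])
x_hat = ista(A, b, lam=0.1)  # with A = I this reduces to soft-thresholding b
```

The soft-thresholding step zeroes out small coefficients, which is why this kind of regularized optimization produces the sparse models discussed in the book.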
Digital forensic analysis of intelligent and smart IoT devices
by
Jo, Wooyeon
,
Shon, Taeshik
,
Kim, Minju
in
Advancements in Mathematical Models for Adversarial Machine Learning
,
Artificial intelligence
,
Compilers
2023
AI is combined with various devices to provide improved performance. IoT devices combined with AI are called smart IoT. Smart IoT devices can be controlled using wearable devices. Wearable devices such as smartwatches and smartbands generate personal information through sensors to provide a range of services to users. As the generated data are preserved in the storage of the wearable device, getting access to these data from the device can prove useful in criminal investigations. We, therefore, propose a forensic model based on direct connections using wired or wireless interfaces, going beyond indirect forensics for wearable devices. The forensic model was derived based on the ecosystem of wearable devices and was divided into logical and physical forensic methods. To confirm the applicability of the forensic model, we applied it to wearable devices from Samsung, Apple, and Garmin. Our results demonstrate that the proposed forensic model can be successfully used to derive artifacts.
Journal Article