Search Results

11,658 results for "Supervised learning (Machine learning)"
Landslide susceptibility prediction based on a semi-supervised multiple-layer perceptron model
Conventional supervised and unsupervised machine learning models used for landslide susceptibility prediction (LSP) have notable drawbacks, such as an insufficient number of recorded landslide samples and the subjective, random selection of non-landslide samples. To overcome these drawbacks, a semi-supervised multiple-layer perceptron (SSMLP) is proposed, with the following steps: (1) an initial landslide susceptibility map (LSM) is produced by a multiple-layer perceptron (MLP) using the original recorded landslide samples and related environmental factors; (2) the initial LSM is classified into five zones of very high, high, moderate, low and very low susceptibility; (3) reasonable grid units from the very-high-susceptibility zone are selected as new landslide samples to expand the original landslide samples; (4) reasonable non-landslide samples are selected from the very-low-susceptibility zone; and (5) the expanded landslide samples, the selected non-landslide samples and the related environmental factors are fed into the MLP once more to predict the final LSM. Xunwu County in Jiangxi Province, China, is selected as the study area. A conventional supervised model (the MLP) and an unsupervised model (the K-means clustering model) are used for comparison. The comparative results indicate that the SSMLP model achieves considerably higher LSP performance than the MLP and K-means clustering in Xunwu County, successfully addressing the drawbacks of conventional machine learning for LSP.
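The five-step pipeline described above is a form of self-training (pseudo-labeling): confident predictions of the initial model expand the training set for a second round. A minimal sketch, using a tiny gradient-descent logistic classifier as a stand-in for the paper's MLP and invented two-feature "environmental factor" data (all names and values here are illustrative, not from the paper):

```python
import math, random

random.seed(0)

def train_logistic(X, y, epochs=200, lr=0.5):
    """Tiny SGD logistic model, standing in for the paper's MLP."""
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - yi
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Synthetic "environmental factors": landslides cluster around (1, 1).
labeled_X = [[1.0, 1.2], [0.9, 1.1], [1.1, 0.8]]  # recorded landslide samples
labeled_y = [1, 1, 1]
labeled_X += [[-1.0, -1.1], [-0.8, -1.3]]         # initial non-landslide picks
labeled_y += [0, 0]
unlabeled = [[1.05, 0.95], [-1.2, -0.9], [0.0, 0.1], [1.2, 1.1], [-1.1, -1.2]]

# (1) initial model on the original samples
model = train_logistic(labeled_X, labeled_y)
# (2)-(4) grade unlabeled grid units; very-high scores become new landslide
# samples, very-low scores become non-landslide samples
for x in unlabeled:
    p = predict(model, x)
    if p > 0.9:
        labeled_X.append(x); labeled_y.append(1)
    elif p < 0.1:
        labeled_X.append(x); labeled_y.append(0)
# (5) retrain on the expanded sample set for the final susceptibility map
model = train_logistic(labeled_X, labeled_y)
print(f"susceptibility at (1, 1): {predict(model, [1.0, 1.0]):.2f}")
```

The key design choice mirrors the abstract: only grid units the first model scores as very high or very low susceptibility are recruited, which avoids the subjective manual selection of non-landslide samples.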
Boosting
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical. This book, written by the inventors of the method, brings together, organizes, simplifies, and substantially extends two decades of research on boosting, presenting both theory and applications in a way that is accessible to readers from diverse backgrounds while also providing an authoritative reference for advanced researchers. With its introductory treatment of all material and its inclusion of exercises in every chapter, the book is appropriate for course use as well. The book begins with a general introduction to machine learning algorithms and their analysis; then explores the core theory of boosting, especially its ability to generalize; examines some of the myriad other theoretical viewpoints that help to explain and understand boosting; provides practical extensions of boosting for more complex learning problems; and finally presents a number of advanced theoretical topics. Numerous applications and practical illustrations are offered throughout.
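The "weak rules of thumb" idea is concrete in AdaBoost, the canonical boosting algorithm: decision stumps that are each barely better than chance are reweighted and combined into an accurate vote. A self-contained sketch on a toy 1-D dataset (the data and stump family are invented for illustration):

```python
import math

# 1-D toy data: the label flips twice, so no single threshold rule is
# accurate, but a weighted vote of three such "rules of thumb" is.
X = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]

def stump(thr, sign):
    """Weak learner: predict `sign` above `thr` and `-sign` below."""
    return lambda x: sign if x > thr else -sign

candidates = [stump(t + 0.5, s) for t in range(-1, 10) for s in (1, -1)]

w = [1 / len(X)] * len(X)   # per-example weights
ensemble = []               # list of (alpha, weak learner)
for _ in range(3):
    # choose the stump with the lowest weighted error
    errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            for h in candidates]
    e, h = min(zip(errs, candidates), key=lambda pair: pair[0])
    alpha = 0.5 * math.log((1 - e) / max(e, 1e-12))
    ensemble.append((alpha, h))
    # reweight: misclassified examples get heavier, the rest lighter
    w = [wi * math.exp(-alpha * yi * h(xi)) for wi, xi, yi in zip(w, X, y)]
    total = sum(w)
    w = [wi / total for wi in w]

def boosted(x):
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

correct = sum(boosted(xi) == yi for xi, yi in zip(X, y))
print(correct, "of", len(X), "correct")  # prints: 10 of 10 correct
```

Each round forces the next stump to focus on the examples the current ensemble gets wrong, which is exactly the mechanism the book's theory analyzes.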
Mastering the game of Go without human knowledge
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher-quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games. AlphaGo Zero goes solo: to beat world champions at the game of Go, the computer program AlphaGo has relied largely on supervised learning from millions of human expert moves. David Silver and colleagues have now produced a system called AlphaGo Zero, which is based purely on reinforcement learning and learns solely from self-play. Starting from random moves, it can reach superhuman level in just a couple of days of training and five million games of self-play, and can now beat all previous versions of AlphaGo. Because the machine independently discovers the same fundamental principles of the game that took humans millennia to conceptualize, the work suggests that such principles have some universal character, beyond human bias.
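The core loop, learning a game's value purely from self-play with no human examples, can be shown in drastically simplified form: a lookup table replaces the deep network, there is no tree search, and the game is a trivial take-away game rather than Go (all of this is an illustrative reduction, not the paper's method):

```python
import random

random.seed(1)

# A miniature of the self-play idea: a value "network" (here a lookup
# table) is trained only from games it plays against itself.  Toy game:
# a pile of N stones; players alternately remove 1 or 2 stones, and
# whoever takes the last stone wins.
N = 10
V = {s: 0.5 for s in range(N + 1)}  # V[s]: est. win prob. for player to move
V[0] = 0.0                          # empty pile: the player to move has lost

def pick_move(s, eps):
    moves = [m for m in (1, 2) if m <= s]
    if random.random() < eps:
        return random.choice(moves)            # exploration
    return min(moves, key=lambda m: V[s - m])  # leave opponent worst off

for _ in range(5000):
    s, visited = N, []
    while s > 0:
        visited.append(s)
        s -= pick_move(s, eps=0.2)
    # the player who made the last move won; walk back through the game,
    # alternating the result, and nudge each state's value toward it
    result = 1.0
    for s in reversed(visited):
        V[s] += 0.05 * (result - V[s])
        result = 1.0 - result

print({s: round(V[s], 2) for s in range(1, N + 1)})
```

After training, the table rediscovers the game's structure on its own: piles that are multiples of 3 are losing for the player to move. The same principle, stronger play generating better training targets generating stronger play, scales up to AlphaGo Zero's network-plus-search loop.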
Practical guide to SHAP analysis: Explaining supervised machine learning model predictions in drug development
Despite increasing interest in using Artificial Intelligence (AI) and Machine Learning (ML) models for drug development, effectively interpreting their predictions remains a challenge, which limits their impact on clinical decisions. We address this issue by providing a practical guide to SHapley Additive exPlanations (SHAP), a popular feature‐based interpretability method, which can be seamlessly integrated into supervised ML models to gain a deeper understanding of their predictions, thereby enhancing their transparency and trustworthiness. This tutorial focuses on the application of SHAP analysis to standard ML black‐box models for regression and classification problems. We provide an overview of various visualization plots and their interpretation, available software for implementing SHAP, and highlight best practices, as well as special considerations, when dealing with binary endpoints and time‐series models. To enhance the reader's understanding of the method, we also apply it to inherently explainable regression models. Finally, we discuss the limitations and ongoing advancements aimed at tackling the current drawbacks of the method.
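SHAP attributes a prediction to each feature via Shapley values: the feature's marginal contribution averaged over all feature orderings, with absent features filled in from a reference. For small inputs this can be computed exactly, which the `shap` package approximates at scale. A sketch with an invented three-feature "black box" (the model, feature names, and baseline are hypothetical):

```python
from itertools import combinations
from math import factorial

def model(price, rating, ads):
    # hypothetical black box: a weekly sales prediction
    return 10 * rating - 2 * price + 5 * ads + 3

def shapley_values(f, x, baseline):
    """Exact Shapley values: average marginal contribution of each feature
    over all subsets of the others, absent features set to the baseline."""
    n = len(x)
    phi = [0.0] * n

    def eval_subset(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(*z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
    return phi

x = [4.0, 3.5, 1.0]          # instance to explain
baseline = [5.0, 3.0, 0.0]   # reference ("average") input
phi = shapley_values(model, x, baseline)
print("attributions:", phi)
# local accuracy: attributions sum to f(x) - f(baseline)
print(sum(phi), "=", model(*x) - model(*baseline))
```

The local-accuracy check in the last line is the additivity property the tutorial's visualization plots (waterfall, force plots) rely on: per-feature attributions decompose the gap between the prediction and the baseline output.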
A pathology foundation model for cancer diagnosis and prognosis prediction
Histopathology image evaluation is indispensable for cancer diagnoses and subtype classification. Standard artificial intelligence methods for histopathology image analyses have focused on optimizing specialized models for each diagnostic task1,2. Although such methods have achieved some success, they often have limited generalizability to images generated by different digitization protocols or samples collected from different populations3. Here, to address this challenge, we devised the Clinical Histopathology Imaging Evaluation Foundation (CHIEF) model, a general-purpose weakly supervised machine learning framework to extract pathology imaging features for systematic cancer evaluation. CHIEF leverages two complementary pretraining methods to extract diverse pathology representations: unsupervised pretraining for tile-level feature identification and weakly supervised pretraining for whole-slide pattern recognition. We developed CHIEF using 60,530 whole-slide images spanning 19 anatomical sites. Through pretraining on 44 terabytes of high-resolution pathology imaging datasets, CHIEF extracted microscopic representations useful for cancer cell detection, tumour origin identification, molecular profile characterization and prognostic prediction. We successfully validated CHIEF using 19,491 whole-slide images from 32 independent slide sets collected from 24 hospitals and cohorts internationally. Overall, CHIEF outperformed the state-of-the-art deep learning methods by up to 36.1%, showing its ability to address domain shifts observed in samples from diverse populations and processed by different slide preparation methods. CHIEF provides a generalizable foundation for efficient digital pathology evaluation for patients with cancer. A study describes the development of a generalizable foundation machine learning framework to extract pathology imaging features for cancer diagnosis and prognosis prediction.
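"Weakly supervised" here means only the slide-level label is known, while a slide contains thousands of unlabeled tiles, the multiple-instance learning (MIL) setting. One common way to pool tile features into a slide prediction is attention pooling; the sketch below shows that pooling mechanism only, with tiny invented features and fixed weights, and is not a reproduction of CHIEF's actual architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def slide_score(tiles, attn_w, clf_w):
    """Attention-based MIL pooling: score each tile, turn scores into
    attention weights, average tile features by attention, classify."""
    logits = [sum(a * f for a, f in zip(attn_w, t)) for t in tiles]
    alpha = softmax(logits)  # how much each tile contributes
    pooled = [sum(a * t[d] for a, t in zip(alpha, tiles))
              for d in range(len(tiles[0]))]
    z = sum(wd * pd for wd, pd in zip(clf_w, pooled))
    return 1 / (1 + math.exp(-z)), alpha

# three tiles with 2-dim "features"; tile 0 looks tumour-like
tiles = [[3.0, 0.2], [0.1, 1.0], [0.0, 0.9]]
attn_w = [2.0, -1.0]   # hypothetical, fixed attention weights
clf_w = [1.5, -0.5]    # hypothetical slide-level classifier weights
score, alpha = slide_score(tiles, attn_w, clf_w)
print("attention:", [round(a, 2) for a in alpha], "slide score:", round(score, 2))
```

The point of the design: because a few informative tiles can dominate the attention weights, a slide-level label alone is enough supervision to train the whole pipeline end to end.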
Machine learning with a reject option: a survey
Machine learning models always make a prediction, even when it is likely to be inaccurate. This behavior should be avoided in many decision support applications, where mistakes can have severe consequences. Albeit already studied in 1970, machine learning with rejection recently gained interest. This machine learning subfield enables machine learning models to abstain from making a prediction when likely to make a mistake. This survey aims to provide an overview on machine learning with rejection. We introduce the conditions leading to two types of rejection, ambiguity and novelty rejection, which we carefully formalize. Moreover, we review and categorize strategies to evaluate a model’s predictive and rejective quality. Additionally, we define the existing architectures for models with rejection and describe the standard techniques for learning such models. Finally, we provide examples of relevant application domains and show how machine learning with rejection relates to other machine learning research areas.
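The simplest instance of ambiguity rejection described above is a confidence threshold (Chow's rule): predict only when the top class probability is high enough, otherwise abstain. A minimal sketch with invented class names and probabilities:

```python
def classify_with_reject(probs, threshold=0.75):
    """Chow's rule: return the most likely class only when its estimated
    probability clears the threshold; otherwise abstain (ambiguity
    rejection) so the case can be deferred to a human expert."""
    label = max(probs, key=probs.get)
    if probs[label] < threshold:
        return None  # reject
    return label

cases = [
    {"benign": 0.97, "malignant": 0.03},  # confident -> predict
    {"benign": 0.55, "malignant": 0.45},  # ambiguous -> reject
]
for p in cases:
    print(classify_with_reject(p))  # prints: benign, then None
```

Raising the threshold trades coverage (fraction of cases answered) for accuracy on the answered cases, the predictive-versus-rejective quality trade-off the survey formalizes; novelty rejection additionally requires detecting inputs unlike the training data, e.g. via a density or outlier score.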
Deep learning: new computational modelling techniques for genomics
As a data-driven science, genomics largely utilizes machine learning to capture dependencies in data and derive novel biological hypotheses. However, the ability to extract new insights from the exponentially increasing volume of genomics data requires more expressive machine learning models. By effectively leveraging large data sets, deep learning has transformed fields such as computer vision and natural language processing. Now, it is becoming the method of choice for many genomics modelling tasks, including predicting the impact of genetic variation on gene regulatory mechanisms such as DNA accessibility and splicing. This Review describes different deep learning techniques and how they can be applied to extract biologically relevant information from large, complex genomic data sets.
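A common first step for the sequence-based models discussed in such reviews is one-hot encoding DNA: each position becomes a 4-dimensional indicator over A/C/G/T, giving convolutional networks an image-like input. A minimal sketch (the encoding is standard; the example sequence is arbitrary):

```python
def one_hot(seq):
    """One-hot encode a DNA sequence as a 4 x L matrix (rows: A, C, G, T),
    the usual input representation for deep models of accessibility,
    splicing, and similar regulatory predictions."""
    row = {"A": 0, "C": 1, "G": 2, "T": 3}
    mat = [[0] * len(seq) for _ in range(4)]
    for j, base in enumerate(seq.upper()):
        mat[row[base]][j] = 1
    return mat

for base, r in zip("ACGT", one_hot("GATTACA")):
    print(base, r)
```

Because a single-nucleotide variant flips exactly one column of this matrix, the effect of genetic variation can be scored by running the model on reference and variant encodings and comparing the predictions.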
Scientific discovery in the age of artificial intelligence
Artificial intelligence (AI) is being increasingly integrated into scientific discovery to augment and accelerate research, helping scientists to generate hypotheses, design experiments, collect and interpret large datasets, and gain insights that might not have been possible using traditional scientific methods alone. Here we examine breakthroughs over the past decade that include self-supervised learning, which allows models to be trained on vast amounts of unlabelled data, and geometric deep learning, which leverages knowledge about the structure of scientific data to enhance model accuracy and efficiency. Generative AI methods can create designs, such as small-molecule drugs and proteins, by analysing diverse data modalities, including images and sequences. We discuss how these methods can help scientists throughout the scientific process and the central issues that remain despite such advances. Both developers and users of AI tools need a better understanding of when such approaches need improvement, and challenges posed by poor data quality and stewardship remain. These issues cut across scientific disciplines and require developing foundational algorithmic approaches that can contribute to scientific understanding or acquire it autonomously, making them critical areas of focus for AI innovation. The advances in artificial intelligence over the past decade are examined, with a discussion on how artificial intelligence systems can aid the scientific process and the central issues that remain despite advances.