963 result(s) for "black-box model"
Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence
Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported that use machine learning (ML) and deep learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, which is why they are termed 'black-box'. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty of interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, yet transparency and predictability are increasingly required. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. Finding flaws in these black-box models in order to reduce false-negative and false-positive outcomes remains difficult and inefficient. In this paper, the development of XAI is reviewed through careful selection and analysis of the current state of the art in XAI research. The paper also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, serving as a starting point for applied and theoretical researchers. Towards the end, it highlights emerging and critical issues in XAI research and showcases major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.
Explainable artificial intelligence: a comprehensive review
Thanks to exponential growth in computing power and vast amounts of data, artificial intelligence (AI) has witnessed remarkable developments in recent years, enabling it to be ubiquitously adopted in our daily lives. Even though AI-powered systems have brought competitive advantages, their black-box nature makes them opaque and prevents them from explaining their decisions. This issue has motivated the introduction of explainable artificial intelligence (XAI), which promotes AI algorithms that can expose their internal process and explain how they reach decisions. The volume of XAI research has increased significantly in recent years, but a unified and comprehensive review of the latest progress has been lacking. This review aims to bridge that gap by identifying the critical perspectives of the rapidly growing body of XAI research. After offering readers a solid XAI background, we analyze and review various XAI methods, grouped into (i) pre-modeling explainability, (ii) interpretable models, and (iii) post-modeling explainability. We also pay attention to current methods dedicated to interpreting and analyzing deep learning models. In addition, we systematically discuss various XAI challenges, such as the trade-off between performance and explainability, evaluation methods, security, and policy. Finally, we present the standard approaches that are leveraged to address these challenges.
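A minimal sketch of the post-modeling (post-hoc) explainability category mentioned in this abstract: permutation importance treats an already fitted model purely as a black box and measures how much shuffling each feature degrades its predictions. The synthetic dataset, model, and settings below are illustrative assumptions, not taken from the reviewed work, and assume scikit-learn is available.

```python
# Sketch of a post-hoc (post-modeling) explanation: permutation importance.
# Dataset, model, and parameters are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black-box") model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Explain it after the fact: shuffle each feature and measure the drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: {result.importances_mean[idx]:.4f}")
```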
Considerations when learning additive explanations for black-box models
Many methods to explain black-box models, whether local or global, are additive. In this paper, we study global additive explanations for non-additive models, focusing on four explanation methods: partial dependence, Shapley explanations adapted to a global setting, distilled additive explanations, and gradient-based explanations. We show that different explanation methods characterize non-additive components in a black-box model’s prediction function in different ways. We use the concepts of main and total effects to anchor additive explanations, and quantitatively evaluate additive and non-additive explanations. Even though distilled explanations are generally the most accurate additive explanations, non-additive explanations such as tree explanations that explicitly model non-additive components tend to be even more accurate. Despite this, our user study showed that machine learning practitioners were better able to leverage additive explanations for various tasks. These considerations should be taken into account when considering which explanation to trust and use to explain black-box models.
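As a rough illustration of one of the additive explanation methods studied in this paper, the sketch below computes a one-dimensional partial-dependence curve directly from its definition: for each grid value of a feature, that feature is overwritten across the dataset and the black-box model's predictions are averaged. The model and data are placeholder assumptions, not the paper's experimental setup.

```python
# Sketch: partial dependence of a black-box model on one feature, computed
# from its definition. Model and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor

X, y = make_friedman1(n_samples=400, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def partial_dependence_1d(model, X, feature, grid_size=20):
    """Average model prediction as one feature is swept over a grid."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    curve = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value                   # fix the feature everywhere
        curve.append(model.predict(X_mod).mean())   # marginalize over the rest
    return grid, np.asarray(curve)

grid, pd_curve = partial_dependence_1d(model, X, feature=0)
print(np.round(pd_curve, 3))
```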
Wastewater treatment plant performance analysis using artificial intelligence – an ensemble approach
In the present study, three artificial intelligence-based non-linear models, i.e. a feed-forward neural network (FFNN), an adaptive neuro-fuzzy inference system (ANFIS) and a support vector machine (SVM), along with a classical multi-linear regression (MLR) method, were applied to predict the performance of the Nicosia wastewater treatment plant (NWWTP) in terms of effluent biological oxygen demand (BODeff), chemical oxygen demand (CODeff) and total nitrogen (TNeff). Daily data were used to develop both single and ensemble models to improve prediction ability. The results of the single models showed that ANFIS provided the most effective outcomes among them. In the ensemble modeling, simple averaging, weighted averaging and neural network ensemble techniques were subsequently proposed to improve the performance of the single models. The results showed that, in predicting BODeff, the simple averaging ensemble (SAE), weighted averaging ensemble (WAE) and neural network ensemble (NNE) increased the performance efficiency of the AI models by up to 14%, 20% and 24%, respectively, in the verification phase, and by 5% or less for both CODeff and TNeff in the calibration phase. This indicates that the NNE is the most robust and reliable ensemble method for predicting NWWTP performance, owing to its non-linear averaging kernel.
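The simple-averaging and weighted-averaging ensembles described above can be sketched in a few lines: predictions from several fitted models are combined either with equal weights or with weights proportional to each member's validation skill. Everything below (the synthetic data, the member models standing in for MLR/SVM/FFNN, the inverse-error weighting rule, and the omission of ANFIS) is an illustrative assumption rather than the study's actual NWWTP setup.

```python
# Sketch of simple-averaging (SAE) and weighted-averaging (WAE) ensembles of
# regressors; data and member models are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=600, n_features=6, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

members = [
    LinearRegression(),                                           # stand-in for MLR
    make_pipeline(StandardScaler(), SVR(C=100.0)),                # stand-in for SVM
    make_pipeline(StandardScaler(),
                  MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                               random_state=0)),                  # stand-in for FFNN
]
preds, errors = [], []
for m in members:
    m.fit(X_train, y_train)
    p = m.predict(X_val)
    preds.append(p)
    errors.append(mean_squared_error(y_val, p))

preds = np.array(preds)                      # shape: (n_members, n_samples)

sae = preds.mean(axis=0)                     # simple averaging ensemble
weights = 1.0 / np.array(errors)             # weight members by inverse validation error
weights /= weights.sum()
wae = (weights[:, None] * preds).sum(axis=0) # weighted averaging ensemble

print("SAE MSE:", mean_squared_error(y_val, sae))
print("WAE MSE:", mean_squared_error(y_val, wae))
```

In practice the weights would be fitted on a separate calibration split rather than on the same data used for the final comparison; the single split here keeps the sketch short.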
Gradient methods for minimizing composite functions
In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient "line search" procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
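The composite structure described here (a smooth term accessed through a black-box oracle plus a simple convex term with known structure) is the setting handled by proximal-gradient methods. The sketch below runs a basic proximal gradient iteration on the lasso-type objective 0.5*||Ax - b||^2 + lam*||x||_1, where the non-smooth part is handled exactly by soft-thresholding. It is a generic illustration of the problem class under assumed data and step size, not a reproduction of the paper's accelerated schemes or line-search procedures.

```python
# Sketch: proximal gradient descent for the composite objective
#   F(x) = 0.5*||A x - b||^2  (smooth part)  +  lam*||x||_1  (simple convex part).
# Data, step size, and lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)          # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(200)
lam = 0.1

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the smooth gradient
step = 1.0 / L
x = np.zeros(50)
for _ in range(500):
    grad = A.T @ (A @ x - b)                 # gradient of the smooth part (the "oracle")
    x = soft_threshold(x - step * grad, step * lam)   # forward-backward (proximal) step

print("nonzeros recovered:", np.flatnonzero(np.abs(x) > 1e-3))
```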
Energy Modeling and Model Predictive Control for HVAC in Buildings: A Review of Current Research Trends
Buildings account for up to 40% of global primary energy consumption and 30% of global greenhouse gas emissions, and thus have a significant impact on climate change. Heating, ventilation, and air-conditioning (HVAC) systems are among the most significant contributors to global primary energy consumption and carbon emissions. Furthermore, HVAC energy demand is expected to rise in the future. Therefore, advancements in HVAC system performance and design are critical for mitigating worldwide energy and environmental concerns. To make such advancements, energy modeling and model predictive control (MPC) play an imperative role in designing and operating HVAC systems effectively. Building energy simulation and analysis techniques help implement HVAC control schemes in the building design and operation phases, and thus provide architects and engineers with quantitative insights into the behavior of HVAC energy flows. Extensive research and advanced HVAC modeling and control techniques have emerged to provide better solutions in response to these issues. This study reviews building energy modeling techniques and state-of-the-art updates of MPC in HVAC applications based on the most recent research articles (e.g., from MDPI’s and Elsevier’s databases). For the review process, relevant keywords and context-based collected data are first investigated to provide a comprehensive overview of their frequency and distribution. Then, the review narrows the topic selection and search scope to focus on relevant research papers and extract relevant information and outcomes. Finally, a systematic review approach is adopted, based on the collected review and research papers, to overview the advancements in building system modeling and MPC technologies. This study reveals that advanced building energy modeling is crucial in implementing MPC-based control and operation design to reduce building energy consumption and cost. This paper presents the details of major modeling techniques, including white-box, grey-box, and black-box modeling approaches. This paper also provides future insights into advanced HVAC control and operation design for researchers in relevant research and practical fields.
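To make the grey-box idea and the predictive-control loop concrete, the sketch below simulates a minimal grey-box zone model (a single thermal resistance R and capacitance C) and chooses the heating power over a short horizon by brute-force enumeration, a toy stand-in for the MPC formulations this review surveys. All parameter values, the comfort band, and the cost weighting are hypothetical.

```python
# Toy grey-box (1R1C) zone model with a brute-force receding-horizon controller.
# R, C, the comfort setpoint, and the cost weights are hypothetical values.
import itertools
import numpy as np

R = 0.005     # thermal resistance between zone and outdoors [K/W]
C = 2.0e6     # zone thermal capacitance [J/K]
dt = 900.0    # time step [s] (15 minutes)

def step(T_in, T_out, q_heat):
    """Discrete 1R1C update: dT/dt = (T_out - T_in)/(R*C) + q_heat/C."""
    return T_in + dt * ((T_out - T_in) / (R * C) + q_heat / C)

def choose_heating(T_in, T_out_forecast, horizon=4, levels=(0.0, 1500.0, 3000.0)):
    """Enumerate heating sequences, pick the one minimizing discomfort + energy."""
    best_cost, best_first = np.inf, 0.0
    for seq in itertools.product(levels, repeat=horizon):
        T, cost = T_in, 0.0
        for q, T_out in zip(seq, T_out_forecast):
            T = step(T, T_out, q)
            cost += max(0.0, 21.0 - T) ** 2 + 1e-7 * q   # comfort penalty + energy cost
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

T_in = 19.0
outdoor = [5.0, 5.0, 6.0, 6.0]
q = choose_heating(T_in, outdoor)
print("apply heating power [W]:", q, "-> next indoor temp:", round(step(T_in, outdoor[0], q), 2))
```

A real MPC formulation would replace the enumeration with a solver and the 1R1C model with a calibrated white-, grey-, or black-box model, but the receding-horizon structure is the same.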
The grammar of interactive explanatory model analysis
The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper shows how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe human-model interaction. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model may increase the accuracy and confidence of human decision making.
Explainable extreme boosting model for breast cancer diagnosis
This study investigates Shapley additive explanations (SHAP) of an extreme gradient boosting (XGBoost) model for breast cancer diagnosis. The study employed the Wisconsin breast cancer dataset, characterized by 30 features extracted from images of breast cells. The SHAP module generated explainer values representing the impact of each feature on the diagnosis. The experiment computed SHAP values for the 569 samples of the dataset. The SHAP explanation indicates that perimeter and concave points have the highest impact on breast cancer diagnosis, showing which features drive the XGBoost model's outcome. The developed XGBoost model achieves an accuracy of 98.42%.
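A minimal sketch of the pipeline this abstract describes, assuming the xgboost, shap, and scikit-learn packages are installed: fit an XGBoost classifier on the Wisconsin breast cancer data, then rank features by mean absolute SHAP value. The hyperparameters and the train/test split are illustrative choices, not the paper's configuration, so the resulting accuracy and ranking will differ from the reported 98.42%.

```python
# Sketch: XGBoost on the Wisconsin breast cancer data, explained with SHAP.
# Hyperparameters and the split are illustrative, not the paper's setup.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = xgb.XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# SHAP values: one additive contribution per feature per sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)      # shape: (n_samples, n_features)

# Rank features by mean |SHAP| to see which drive the diagnosis most.
mean_abs = np.abs(shap_values).mean(axis=0)
for idx in mean_abs.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {mean_abs[idx]:.3f}")
```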
Building Energy Prediction Models and Related Uncertainties: A Review
Building energy usage has been an important issue in recent decades, and energy prediction models are important tools for analysing this problem. This study provides a comprehensive review of building energy prediction models and uncertainties in the models. First, this paper introduces three types of prediction methods: white-box models, black-box models, and grey-box models. The principles, strengths, shortcomings, and applications of every model are discussed systematically. Second, this paper analyses prediction model uncertainties in terms of human, building, and weather factors. Finally, the research gaps in predicting building energy consumption are summarised in order to guide the optimisation of building energy prediction methods.
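As a small illustration of the black-box category this review covers, the sketch below fits a gradient-boosting regressor to synthetic building-energy data and compares it with a linear baseline. The feature set and the underlying relationship are invented for illustration only and do not correspond to any data discussed in the paper.

```python
# Sketch: a "black-box" building energy predictor (gradient boosting) next to a
# simple linear baseline, on invented synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
outdoor_temp = rng.uniform(-5, 35, n)        # [degC]
occupancy = rng.integers(0, 50, n)           # [people]
hour = rng.integers(0, 24, n)                # hour of day

# Invented ground truth: heating/cooling load plus occupant and schedule effects.
energy = (
    np.abs(outdoor_temp - 20) * 2.5
    + occupancy * 0.8
    + 5.0 * np.sin(2 * np.pi * hour / 24)
    + rng.normal(0, 2.0, n)
)
X = np.column_stack([outdoor_temp, occupancy, hour])
X_train, X_test, y_train, y_test = train_test_split(X, energy, random_state=0)

for name, model in [("linear baseline", LinearRegression()),
                    ("black-box GBM", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f}")
```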
Explainable deep learning approach for advanced persistent threats (APTs) detection in cybersecurity: a review
In recent years, Advanced Persistent Threat (APT) attacks on network systems have increased through sophisticated fraud tactics. Traditional Intrusion Detection Systems (IDSs) suffer from low detection accuracy, high false-positive rates, and difficulty identifying unknown attacks such as remote-to-local (R2L) and user-to-root (U2R) attacks. This paper addresses these challenges by providing a foundational discussion of APTs and the limitations of existing detection methods. It then pivots to explore the novel integration of deep learning techniques and Explainable Artificial Intelligence (XAI) to improve APT detection. This paper aims to fill the gaps in the current research by providing a thorough analysis of how XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), can make black-box models more transparent and interpretable. The objective is to demonstrate the necessity of explainability in APT detection and propose solutions that enhance the trustworthiness and effectiveness of these models. It offers a critical analysis of existing approaches, highlights their strengths and limitations, and identifies open issues that require further research. This paper also suggests future research directions to combat evolving threats, paving the way for more effective and reliable cybersecurity solutions. Overall, this paper emphasizes the importance of explainability in enhancing the performance and trustworthiness of cybersecurity systems.
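As a minimal illustration of the model-agnostic explanation techniques the review discusses (here LIME), the sketch below trains a generic classifier on synthetic "network-flow-like" features and explains a single prediction locally, assuming the lime and scikit-learn packages are installed. The dataset, feature names, class labels, and model are invented placeholders and do not represent any real intrusion-detection system.

```python
# Sketch: a local LIME explanation for one prediction of a black-box classifier.
# The synthetic "flow" features and the model are placeholders, not a real IDS.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

feature_names = ["duration", "bytes_sent", "bytes_recv", "n_connections",
                 "failed_logins", "privileged_cmds"]
X, y = make_classification(n_samples=1000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification",
)
# Explain why the model flagged (or cleared) the first test sample.
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```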