Asset Details
Feature selection strategies: a comparative analysis of SHAP-value and importance-based methods
by Liang, Qianxin; Khoshgoftaar, Taghi M; Hancock, John T; Wang, Huanjing
in
Big Data
/ Classification
/ Classifiers
/ Comparative analysis
/ Credit
/ Credit card fraud
/ Credit cards
/ Datasets
/ Decision making
/ Decision trees
/ Experiments
/ Feature selection
/ Fraud
/ Fraud prevention
/ Performance evaluation
/ Statistical tests
/ Trees
/ Values
2024
Journal Article
Overview
In the context of high-dimensional credit card fraud data, researchers and practitioners commonly use feature selection techniques to enhance the performance of fraud detection models. This study presents a comparison of model performance using the most important features selected by SHAP (SHapley Additive exPlanations) values and by the model's built-in feature importance list. Both methods rank features and choose the most significant ones for model assessment. To evaluate the effectiveness of these feature selection techniques, classification models are built using five classifiers: XGBoost, Decision Tree, CatBoost, Extremely Randomized Trees, and Random Forest. The Area under the Precision-Recall Curve (AUPRC) serves as the evaluation metric. All experiments are executed on the Kaggle Credit Card Fraud Detection Dataset. The experimental outcomes and statistical tests indicate that feature selection based on built-in importance values outperforms selection based on SHAP values across classifiers and various feature subset sizes. For models trained on larger datasets, it is therefore recommended to use the model's built-in feature importance list as the primary feature selection method over SHAP. This recommendation reflects the fact that computing SHAP values is a separate, additional step, whereas built-in feature importance is produced as a by-product of training and requires no extra effort. Consequently, opting for the model's built-in feature importance list offers a more efficient and practical approach for larger datasets and more intricate models.
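The pipeline described in the abstract, rank features, keep the top k, retrain, and score with AUPRC, can be sketched as follows. This is not the authors' code: it uses a synthetic imbalanced dataset as a stand-in for the Kaggle fraud data and a Random Forest's built-in `feature_importances_` for the ranking step; the SHAP-based variant would instead rank features by mean absolute SHAP value (e.g. via the `shap` library).

```python
# Hedged sketch of importance-based feature selection evaluated with AUPRC.
# All data and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced dataset standing in for credit card fraud data.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: fit once on all features to obtain the built-in importance ranking.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(ranker.feature_importances_)[::-1]  # most important first

# Step 2: keep only the top-k features and retrain on that subset.
k = 10
top_k = ranking[:k]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr[:, top_k], y_tr)

# Step 3: evaluate with the Area under the Precision-Recall Curve (AUPRC),
# the metric used in the study for the imbalanced fraud-detection task.
scores = model.predict_proba(X_te[:, top_k])[:, 1]
auprc = average_precision_score(y_te, scores)
print(f"AUPRC with top-{k} features: {auprc:.3f}")
```

Repeating step 1 with SHAP values in place of `feature_importances_`, and sweeping k, would reproduce the study's comparison across feature subset sizes.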
Publisher
Springer Nature B.V