Asset Details
LLM ethics benchmark: a three-dimensional assessment system for evaluating moral reasoning in large language models
by
Chen, Kevin; Atkinson, David; Jiao, Junfeng; Murali, Abhejay; Afroogh, Saleh; Dhurandhar, Amit
Subjects
639/705/1042; 639/705/258; AI alignment; Artificial intelligence; Artificial Intelligence - ethics; Benchmark datasets; Benchmarking; Cognition & reasoning; Decision making; Decision Making - ethics; Ethics; Humanities and Social Sciences; Humans; Language; Large Language Models; LLM; Moral reasoning; Morals; multidisciplinary; Responsible AI; Science; Science (multidisciplinary)
Year
2025
Type
Journal Article
Overview
This study establishes a novel framework for systematically evaluating the moral reasoning capabilities of large language models (LLMs) as they increasingly integrate into critical societal domains. Current assessment methodologies lack the precision needed to evaluate nuanced ethical decision-making in AI systems, creating significant accountability gaps. Our framework addresses this challenge by quantifying alignment with human ethical standards through three dimensions: foundational moral principles, reasoning robustness, and value consistency across diverse scenarios. This approach enables precise identification of ethical strengths and weaknesses in LLMs, facilitating targeted improvements and stronger alignment with societal values. To promote transparency and collaborative advancement in ethical AI development, we are publicly releasing both our benchmark datasets and evaluation codebase at https://github.com/The-Responsible-AI-Initiative/LLM_Ethics_Benchmark.git.
Publisher
Nature Publishing Group UK (Nature Portfolio)