Search Results
3 results for "Frank, Darius-Aurel"
Human decision-making biases in the moral dilemmas of autonomous vehicles
The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people's moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people's personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people's moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people's moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decisions, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. In addition, once the personal perspective is made salient, participants preserve the lives associated with that perspective, i.e., those taking the passenger perspective shift towards sacrificing the pedestrian, and vice versa. These biases in people's moral decisions underline the social challenge in the design of a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.
Drivers and social implications of Artificial Intelligence adoption in healthcare during the COVID-19 pandemic
The COVID-19 pandemic continues to impact people worldwide, steadily depleting scarce healthcare resources. Medical Artificial Intelligence (AI) promises much-needed relief, but only if the technology is adopted at scale. The present research investigates people's intention to adopt medical AI, as well as the drivers of this adoption, in a representative study of two European countries (Denmark and France, N = 1068) during the initial phase of the COVID-19 pandemic. Results reveal AI aversion: only 1 in 10 individuals chooses medical AI over a human physician in a hypothetical COVID-19 triage phase prior to hospital entrance. Key predictors of medical AI adoption are people's trust in medical AI and, to a lesser extent, the trait of open-mindedness. More importantly, our results reveal that mistrust and perceived uniqueness neglect from human physicians, as well as a lack of social belonging, significantly increase people's medical AI adoption. These results suggest that for medical AI to be widely adopted, people may need to express less confidence in human physicians and even to feel disconnected from humanity. We discuss the social implications of these findings and propose that successful medical AI adoption policy should focus on trust-building measures, without eroding trust in human physicians.
In companies we trust: consumer adoption of artificial intelligence services and the role of trust in companies and AI autonomy
Purpose: Companies utilize increasingly capable Artificial Intelligence (AI) technologies to deliver modern services across a range of consumer service industries. AI autonomy, however, sparks skepticism among consumers, decreasing their willingness to adopt AI services. This raises the question of whether consumer trust in companies can overcome consumer reluctance in their decisions to adopt high (vs low) autonomy AI services.
Design/methodology/approach: Using a representative survey (N = 503 consumers, corresponding to N = 3,690 observations), this article investigated the link between consumer trust in a company and consumers' intentions to adopt high (vs low) autonomy AI services from that company across 23 consumer service companies spanning six distinct service industries.
Findings: The results confirm a significant and positive relationship between consumer trust in a company and consumers' intentions to adopt AI services from the same company. AI autonomy, however, moderates this relationship, such that high (vs low) AI autonomy weakens the positive link between trust in a company and AI service adoption. This finding replicates across all 23 companies and the associated six industries and is robust to the inclusion of several theoretically important control variables.
Originality/value: The current research contributes to the recent stream of AI research by drawing attention to the interplay between trust in companies and the adoption of high autonomy AI services, with implications for the successful deployment and marketing of AI services.