Psychologists warn that AI is perceived as lacking human experience and genuine understanding.
Artificial Moral Advisors (AMAs) are systems based on artificial intelligence (AI) that are beginning to be designed to help people make moral decisions based on established ethical theories, principles, or guidelines. While prototypes are being developed, AMAs are not yet in use for offering consistent, unbiased recommendations and rational moral advice. As machines equipped with AI grow in technical capability and move into the moral domain, it is important to understand how people think about such artificial moral advisors.
A study led by the University of Kent's School of Psychology explored how people perceive these advisors and whether they trust their judgment compared with that of human advisors. It found that while people may expect artificial intelligence to offer impartial and rational advice, they do not fully trust it to make ethical decisions on moral dilemmas.
The study, published in the journal Cognition, shows that people have a significant aversion to AMAs compared with human advisors, particularly when the advice is based on utilitarian principles (actions that could positively affect the majority). Advisors who gave non-utilitarian advice (e.g., adhering to moral rules rather than maximizing outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors, whether human or AI, who align with principles that prioritize individuals over abstract outcomes.
Even when participants agreed with the AMA's decision, they still anticipated disagreeing with AI in the future, indicating an inherent skepticism.
Dr. Jim Everett led the research at Kent, alongside Dr. Simon Myers of the University of Warwick.
"Trust in moral AI is not just about accuracy or consistency; it is about alignment with human values and expectations. Our research highlights a critical challenge for the adoption of AMAs and for designing systems that people genuinely trust," said Everett. As the technology advances, AMAs could become integrated into decision-making processes ranging from healthcare to legal systems.
More information: Simon Myers et al., People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors, Cognition (2024). DOI: 10.1016/j.cognition.2024.106028
Provided by University of Kent
Citation: Study finds skepticism about AI's role in moral decisions (2025, February 10). Retrieved February 10, 2025 from https://techxplore.com/news/2025-02-skeptism-ai-moral-decision rowlos.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.