Humanize your AI to reduce fraud
People are 18.5% less likely to act unethically (e.g. try to commit fraud) with AI chatbots that have human-like features (e.g. a name, an avatar, or a less formal way of speaking).
Topics: AI | Website/App | Customer Experience
For: B2C. Can be tested for B2B
Research date: March 2022
Universities: University of Technology Sydney. Sungkyunkwan University. University of California, San Diego. Concordia University. University of Illinois Chicago.
Intro
Imagine you're building a chatbot for a fashion ecommerce brand. One of its tasks will be handling returns and refunds.
However, you suspect that customers may try to game the system (e.g. report a fake reason, such as a product defect, to get a full refund), and that this will create losses from fraud rather than savings from streamlined customer support.
Science says that if you make some small design tweaks to your chatbot, you can drastically cut fraud attempts.
P.S.: This insight is 1 of the 50 covered in our Science-based Playbook of AI Best Practices.
Recommendation
Make your AI chatbot more human-like (e.g. give it a name or a human-like avatar), or let customers know the AI works alongside, or is monitored by, a person.
This will make people feel more responsible for their actions and act more ethically (e.g. they will be less likely to invent false reasons to get refunds).
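If your chatbot runs on a large language model, this recommendation can be as small as the persona settings and system prompt. Below is a minimal sketch in Python, with hypothetical names and fields, of what "human-like plus human oversight" could look like; it illustrates the idea, not the exact setup tested in the study.

```python
# Minimal persona sketch (hypothetical names and fields).
PERSONA = {
    "name": "Maya",                    # a human first name instead of "SupportBot"
    "avatar_url": "/static/maya.png",  # a human-looking avatar, not the company logo
    "tone": "warm and informal",
}

SYSTEM_PROMPT = (
    f"You are {PERSONA['name']}, a customer support assistant. "
    f"Speak in a {PERSONA['tone']} voice, address the customer by first name, "
    "and mention early on that a human colleague reviews refund and return "
    "requests before they are approved."
)
```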

Findings
People are more likely to act unethically (e.g. request a refund for false technical issues) when interacting with AI, compared to human support agents or an AI that feels more human.
Across 4 experiments, researchers found that people interacting with AI (vs a human support agent):
Were 34.7% more likely to act unethically (e.g. illegitimately receive benefits)
Were 10.6% more likely to report a false reason for returns to get lower shipping costs
Felt 19.5% less guilty when acting unethically
However, people are 18.5% less likely to act unethically with chatbots that have human-like features (e.g. human name), compared to ones that do not.
Why it works
When we act unfairly, we experience guilt. The fear of feeling guilty is one of the main things preventing our unethical behavior.
We perceive AI as lacking emotional capacity and moral judgment.
So we feel less guilt about being unfair to it and become more likely to act unethically.
When an AI has human-like features, interacting with it feels more like dealing with a person.
This creates a stronger emotional connection, so we anticipate guilt again and behave more ethically to avoid it.
Limitations
The study looked at different types of AI (e.g. Amazon Alexa, the robot Nao, AI chatbots), making it broadly applicable, but the effect may vary across cultures and with how familiar the user is with AI (e.g. unfamiliar users might underestimate its capabilities and act more unethically).
The study didn't consider how intimidating AI design features (e.g. serious facial expressions or tone) could affect customers' perception of the chatbot's ability to detect dishonesty. For example, a more intimidating facial expression and authoritative tone might further deter unethical behavior.
Companies using this
Different companies humanize chatbots to varying degrees. This may affect how ethically users act.
Some companies opt for minimal humanization:
H&M's chatbot processes returns but lacks human-like traits in design and tone (e.g. logo as a profile picture, robotic responses). This may lead to more unethical behavior (e.g. getting undue refunds).
Grammarly's chatbot introduces itself as a "support bot," reinforcing a cold, tech-driven interaction that could encourage unethical behavior around account management.
Other companies fully humanize chatbots for sensitive tasks:
Finnish media conglomerate A-Lehdet's AI, "Aaro," manages subscriptions and uses a common Finnish name, emojis, and a warm tone that encourages ethical behavior.
Bank of America's chatbot "Erica" adapts its tone to clients' preferences. For sensitive issues, a human takes over, ensuring requests follow bank policies and reducing misuse.
Project management tool Wrike gave its chatbot human-like traits with a human-looking profile picture and a friendly tone, even addressing users by their names. Giving the bot a human name could further improve this.

Steps to implement
If you're in retail or provide customer service where fraud could be an issue, build your chatbot with human-like features (e.g. a name or a friendly tone) to discourage unethical user behavior.
Align the chatbot's human-like qualities to the task. For high-risk tasks (e.g. refund claims), enhance human likeness to encourage ethical behavior. For simpler tasks (e.g. order tracking), a neutral tone is sufficient.
For critical tasks with high stakes for unethical behavior (e.g. significant refund requests), notify customers that the bot is only the first touch point and a human agent will step in to take over, as in the sketch below.
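To make that last step concrete, here is a hedged sketch in Python (with a hypothetical threshold and function name) of how a humanized bot could resolve routine requests itself while announcing that high-value refund claims go to a human reviewer.

```python
# Hypothetical escalation threshold; tune it to your own fraud-loss data.
HIGH_RISK_REFUND_EUR = 100

def route_refund_request(order_value_eur: float, customer_name: str) -> str:
    """Return the bot's reply to a refund request, escalating high-stakes cases."""
    if order_value_eur >= HIGH_RISK_REFUND_EUR:
        # High stakes: tell the customer a person will take over.
        return (
            f"Thanks, {customer_name}. I've logged your request, and my colleague "
            "on the returns team will review it and get back to you today."
        )
    # Low stakes: the human-like bot handles it directly.
    return (
        f"No problem, {customer_name}! I've started your return; you'll get a "
        "confirmation email in a few minutes."
    )

print(route_refund_request(149.0, "Alex"))  # escalated to a human reviewer
print(route_refund_request(19.0, "Alex"))   # handled by the bot
```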
Study type
Online experiments.
Research
AI increases unethical consumer behavior due to reduced anticipatory guilt. Journal of the Academy of Marketing Science (March 2022).
Researchers
Tae Woo Kim. University of Technology Sydney
Hyejin Lee. Sungkyunkwan University
Michelle Yoosun Kim. University of California, San Diego
SunAh Kim. Concordia University
Adam Duhachek. University of Illinois Chicago
Remember: This is a scientific discovery. In the future it will probably be better understood and could even be proven wrong (that's how science works). It may also not be generalizable to your situation. If it's a risky change, always test it on a small scale before rolling it out widely.