Humanize your AI to reduce fraud

People are 18.5% less likely to act unethically (e.g. try to commit fraud) with AI chatbots that have human-like features (e.g. a name, an avatar, a less formal tone).

New to Science Says? This is a 3min practical summary of a scientific study 🎓 Join 31,279 marketers who use science, not flawed opinions 📈 Subscribe here

This insight is brought to you by… Guidde

Ditch static docs - create brilliant how-to videos with Guidde!

  • Simplify tasks into step-by-step guides in minutes with AI

  • Update content instantly to stay accurate

  • Share guides in any language with ease

Make training impactful, inclusive, and fast. The browser extension is 100% free. Try Guidde today!

Want to sponsor Science Says? Here’s all you need to know.

📝 Intro

Imagine you're building a chatbot for a fashion ecommerce brand. One of its tasks will be handling returns and refunds.

However, you suspect that customers may try to game the system (e.g. report a fake reason, such as a product defect, to get a full refund), creating losses from fraud rather than savings from optimized customer support.

Science says that if you make some small design tweaks to your chatbot, you can drastically cut fraud attempts.

P.S.: This insight is 1 of the 50 covered in our Science-based Playbook of AI Best Practices.

Want hundreds more insights like these? Explore all Science Says insights here.

Humanize AI chatbots to reduce unethical user behavior

Topics: AI | Website/App | Customer Experience
For: B2C. Can be tested for B2B 
Research date: March 2022
Universities: University of Technology Sydney. Sungkyunkwan University. University of California, San Diego. Concordia University. University of Illinois Chicago.

📈 Recommendation

Make your AI chatbot more human-like (e.g. give it a name or a human-like avatar), or let customers know the AI works alongside, or is monitored by, a person.

This will make people feel more responsible for their actions and act more ethically (e.g. they will not try to provide false reasons for getting refunds).

🎓 Findings

  • People are more likely to act unethically (e.g. request a refund for false technical issues) when interacting with AI, compared to human support agents or an AI that feels more human.

  • Across 4 experiments, researchers found that, when interacting with AI (vs a human support agent), people:

    • Were 34.7% more likely to act unethically (e.g. illegitimately receive benefits)

    • Were 10.6% more likely to report a false reason for returns to get lower shipping costs 

    • Felt 19.5% less guilty when acting unethically 

  • However, people are 18.5% less likely to act unethically with chatbots that have human-like features (e.g. human name), compared to ones that do not. 

🧠 Why it works

  • When we act unfairly, we experience guilt. The fear of feeling guilty is one of the main things preventing our unethical behavior.

  • We perceive AI as lacking emotional capacity and moral judgment.

  • So we feel less guilt about being unfair to it and become more likely to act unethically.

  • When AI has human-like features, our interaction with it feels more human, so we start to perceive the AI itself as more human.

  • This creates a stronger emotional connection, so the anticipation of guilt returns and encourages ethical behavior.

🧩 The Missing Piece in Your SEO Strategy

SEO is competitive, and the brands ranking at the top aren’t leaving it to chance. 

Backlinks are one of the strongest ranking factors, and your competitors are already investing in them to stay ahead.

dofollow.com is a premium SaaS link-building agency that helps businesses improve their rankings on search engines like Google by providing authoritative backlinks from reputable sites such as HubSpot.

This announcement was sponsored. Want your brand here? Click here.

Limitations

  • The study looked at different types of AI (e.g. Amazon Alexa, the robot Nao, AI chatbots), making it broadly applicable, but the effect may vary across cultures and with how familiar the user is with AI (e.g. unfamiliar users might underestimate its capabilities and act more unethically).

  • The study didn’t consider how intimidating AI design features (e.g. serious facial expressions or tone) could affect customers’ perception of the chatbot’s ability to detect dishonesty. For example, a more intimidating facial expression and authoritative tone might further deter unethical behavior.

🏢 Companies using this

  • Different companies humanize chatbots to varying degrees. This may affect how ethically users act.

  • Some companies opt for minimal humanization:

    • H&M’s chatbot processes returns but lacks human-like traits in design and tone (e.g. logo as a profile picture, robotic responses). This may lead to more unethical behavior (e.g. getting undue refunds).

    • Grammarly’s chatbot introduces itself as a “support bot,” reinforcing a cold, tech-driven interaction that could encourage unethical behavior (e.g. in account-related requests).

  • Other companies fully humanize chatbots for sensitive tasks:

    • Finnish media conglomerate A-Lehdet’s AI, "Aaro," manages subscriptions and uses a common Finnish name, emojis, and a warm tone that encourages ethical behavior.

    • Bank of America’s chatbot "Erica" adapts its tone to clients’ preferences. For sensitive issues, a human takes over, ensuring requests follow bank policies and reducing misuse.

    • Project management tool Wrike gave its chatbot human-like traits with a human-looking profile picture and a friendly tone, and it even addresses users by name. Giving the bot a human name could strengthen the effect further.

⚡ Steps to implement

  • If you’re in retail or provide customer service where fraud could be an issue, build your chatbot with human-like features (e.g. a name or a friendly tone) to discourage unethical user behavior.

  • Align the chatbot’s human-like qualities to the task. For high-risk tasks (e.g. refund claims), enhance human likeness to encourage ethical behavior. For simpler tasks (e.g. order tracking), a neutral tone is sufficient.

  • For critical tasks with high stakes for unethical behavior (e.g. significant refund requests), let customers know the bot is only the first touchpoint and a human agent will take over (see the sketch after this list for one way to encode these tweaks).
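
As a rough illustration of the steps above, here is a minimal sketch in Python of how a chatbot's persona and human-handoff rule could be configured. It assumes no particular chatbot framework; all names (PersonaConfig, should_escalate, REFUND_ESCALATION_THRESHOLD, the avatar URL) are hypothetical placeholders, not references to a real API.

```python
# Minimal sketch, framework-agnostic: a human-like persona plus a rule that
# routes high-stakes refund claims to a human agent. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class PersonaConfig:
    name: str            # human first name instead of "Support Bot"
    avatar_url: str      # human-looking avatar instead of a company logo
    tone: str            # e.g. "friendly" rather than "formal"
    handoff_notice: str  # tells the user a human reviews sensitive requests


PERSONA = PersonaConfig(
    name="Anna",
    avatar_url="https://example.com/avatars/anna.png",  # placeholder URL
    tone="friendly",
    handoff_notice="I'm your first point of contact; a colleague of mine reviews refund requests.",
)

# Hypothetical threshold (in your store currency) above which a human takes over.
REFUND_ESCALATION_THRESHOLD = 100.0


def should_escalate(intent: str, order_value: float) -> bool:
    """Send high-stakes refund claims to a human; keep simple tasks with the bot."""
    return intent == "refund_request" and order_value >= REFUND_ESCALATION_THRESHOLD


def greet(customer_name: str) -> str:
    """Human-like greeting: named persona, informal tone, transparent about handoff."""
    return (
        f"Hi {customer_name}, I'm {PERSONA.name} 👋 How can I help today? "
        f"{PERSONA.handoff_notice}"
    )


if __name__ == "__main__":
    print(greet("Sam"))
    print(should_escalate("refund_request", 250.0))   # True: human agent steps in
    print(should_escalate("order_tracking", 250.0))   # False: bot handles it
```

How much of this you apply should match the stakes of the task: the persona fields cover the humanization tweaks, while the escalation rule covers the "a human steps in" notice for high-risk requests.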

🔍 Study type

Online experiments.

📖 Research

AI increases unethical consumer behavior due to reduced anticipatory guilt. Journal of the Academy of Marketing Science (March 2022).

🏫 Researchers

  • Tae Woo Kim. University of Technology Sydney

  • Hyejin Lee. Sungkyunkwan University

  • Michelle Yoosun Kim. University of California, San Diego

  • SunAh Kim. Concordia University

  • Adam Duhachek. University of Illinois Chicago

Remember: This is a scientific discovery. In the future it will probably be better understood and could even be proven wrong (that’s how science works). It may also not be generalizable to your situation. If it’s a risky change, always test it on a small scale before rolling it out widely.

What did you think of today's insight?

Help me make the next insights 🎓 even more useful 📈


Here is how else Science Says can help your marketing:

  • 📈 Join the Science Says Platform to unlock all 250+ insights, real-world case studies, and exclusive playbooks

  • 📘 Boost your sales and profits with topic-specific Science-based Playbooks (e.g. Pricing, Ecommerce, SaaS, AI Best Practices)

  • 🔬 Get on-demand evidence to make better decisions. My team of PhDs and I regularly help leading brands in FMCG (e.g. Mars), retail, and tech. Reach out here.

🎓 It took 3 of us 11.5 hours to accurately turn this 17-page research paper into this 3min insight.

If you enjoyed it, please share it with a friend, or share it on LinkedIn and tag me (Thomas McKinlay); I’d love to engage and amplify!

If this was forwarded by a friend you can subscribe below for $0 👇