Bad news? Send AI. Good news? Be human

We don’t think AI has bad intentions, so we are up to 2.6 times more likely to accept bad news delivered by it (e.g. accept a higher-than-expected price).

🤖 This is a Science Says special of the latest scientific research in AI 🎓 Use it to get better, evidence-based results when using AI in your products and marketing 📈

This insight is brought to you by… Conjointly

Successful businesses make decisions based on real data, from real consumers. 

Conjointly is an all-in-one survey research platform with easy-to-use advanced tools and expert support. Whether you're optimising product pricing, refining claims and messaging, perfecting your product range, or exploring other research questions, Conjointly helps you make data-driven decisions with confidence.

Want to sponsor Science Says? Here’s all you need to know.

📝 Intro

You’ve put together a new pricing calculator on your website. After filling out the form, people will receive an email giving them a customized quote based on their requirements. 

Your system is all set up, but you’re finalizing the details for emailing your customers. You’re debating between two options for your automated emails: 

  1. Using a bot “Dani” with a human avatar picture and a friendly, human-like tone.

  2. Using an obviously automated bot, without a human picture or name, that sends a basic email sharing the pricing quote.

Which of these would make it more likely for your customer to accept your offer, and buy? Turns out, what you choose can make a big difference.

Want hundreds more insights like these? Explore all Science Says insights here.

People are more likely to accept bad news if it’s given by an AI (vs a person or even a human-like AI)

Topics: AI* | Customer experience | Pricing
For: B2C, Can be tested for B2B
Research date: December, 2021
Universities: University of Kentucky, University of Technology Sydney, University of Illinois

*New AI category: As more AI studies are getting published and we cover them in Science Says, we’re creating a new AI category on the Science Says Platform.

📈 Recommendation

If you are giving bad news (e.g. delay, cancellation, rejection) or an offer that is probably worse than what the recipient expects (e.g. a higher price than expected for an upgrade), make this offer come from an obviously non-human AI bot.

If you are giving good news (e.g. free upgrade, a gift) or making an offer that is probably better than what your customer is expecting (e.g. a lower price), then use an actual person or use an AI that is humanlike (e.g. has a name and a face, does not write robotically).

People will be more likely to accept the offer, and more satisfied with the experience.

🎓 Findings

  • People prefer, and are more likely to accept, bad news or higher prices when they are conveyed by an obvious AI or automated bot rather than by humans or human-like bots. The opposite happens with good or better-than-expected news.

  • As part of a series of 5 experiments, researchers found that people:

    • Were 2.6x more likely to accept a worse-than-expected price for a resold concert ticket when the offer came from an obvious bot rather than a human-like bot (one with a name and a human-like picture)

    • Were 1.17x more likely to accept an offer for a concert ticket at a better-than-expected price when the price was offered by a human rather than a bot

    • Were 1.25x more likely to accept, and 1.17x more satisfied with, a higher-than-expected price for an Uber ride when the price was given by an obvious AI bot

    • When a human-like bot offered a worse-than-expected price, 83% of people wanted to change to a different customer agent.

  • The effect is stronger for:

    • Bad news when the bot seems less human-like

    • Good news when the bot seems more human-like

🧠 Why it works

  • The way we react to something is largely influenced by whether we think it’s done intentionally or not.

  • When we’re dealing with an AI bot, we don’t think its actions are done on purpose, making us less upset about bad news and less happy about good news it gives us.

  • In contrast, when a human gives us bad news, we assume it is being done intentionally (e.g. charging more to increase profits), making us more upset about the news.

  • When we receive news that makes us happy, like being charged less for an item, we give a human telling us this news credit for caring about us, while an AI doing the same is just following its programming.

  • The more human-like an AI is (e.g. its tone and profile picture), the more likely we are to ascribe human-like intentions to it.

🔝 MAD//Masters with Rory Sutherland - Raise your status in your organisation

MAD//Masters is an online course that helps you understand the psychology that underpins consumer behaviour - combine it with creativity - and unlock that creative magic that drives results.

Through 10 modules, monthly live sessions, and loads of bonus content, Rory Sutherland passes on all of the wisdom he has developed throughout his career.

This announcement was sponsored. Want your brand here? Click here.

Limitations

  • The research focused exclusively on prices, looking at concert prices and Uber fees. It’s very likely, but not tested directly, that the effect would be the same for other communication, like delays, changes, or cancellations.

  • As AI becomes more prevalent, people’s attitudes toward interacting with it are rapidly evolving, so the strength of this effect is likely to change over time.

  • It’s unclear if this effect would hold in back-and-forth negotiations (compared to a price being offered and a choice given to accept or not).

  • While not tested, it’s likely the power of AI bots would be enhanced in physical interactions (e.g. dealing with a robot customer rep).

🏢 Companies using this

  • Many, if not most, company websites now use AI-powered chatbots, especially for customer support, but often include an option to escalate to a human agent if needed.

  • Companies have different levels of chatbot ‘humanness’, but this often does not seem to be thought out intentionally. For example:

    • Eyeglasses company eye-oo uses human cartoon avatars without names to help customers find glasses and lenses suitable for them.

    • Companies like office furniture retailer Beltton, travel refund site AirHelp, and real estate site Endeska make clear their use of AI bots - their chats don’t include names or profile pictures and start with various options for users to choose the topic relevant to them before beginning a chat.

  • While pricing calculators powered by AI are relatively common, with companies like CartBuddyGPT, Idea Link, and Calculator Studio providing plug-ins and APIs, these generally seem built to support sales and customer service teams, rather than to serve as a customer-facing chatbot. The Vtiger chatbot by Calculus AI is one of the rare ones that combines both pricing functionality with a chatbot for use on websites.  

  • There don’t yet appear to be companies explicitly using different types of customer service agents based on the type of news being delivered. However, companies including CRM HubSpot and email outreach service Instantly use AI-powered bots with non-human avatars to begin conversations before switching the conversation to a “human” agent with a name and a human communication tone.

The Vtiger chatbot offers a chatbot functionality for websites that integrates with a pricing calculator - although it’s not designed to easily adapt the ‘humanness’ of the chatbot based on the offers it gives

⚡ Steps to implement

  • Depending on whether you’re sharing good or bad news, you can adapt your bots to make them seem more or less human-like.

    • For good news: Make your customer service bots seem more human-like by giving them a realistic profile picture and a human name. When configuring your bot, specify that you want its tone to sound human-like.

    • For bad news: Make it clear your AI bot is not human. This includes giving an obviously non-human name (or no name), a robot avatar picture (or no picture), and programming your bot to use a tone that’s clearly not human. You can also be explicit by starting the conversation with a clarifying note such as “Connecting you to our AI agent”.

  • If you give custom price quotes, you can use a pricing calculator on your website, where customers enter the relevant details of their purchase before being given a quote. To determine whether your customers would consider a price high or low, you have two options:

    • Benchmark against competitors: Check whether your price for a similar product or service is higher or lower than others available in the market.

    • Median prices: Check whether the quote being requested is higher or lower than the median price of the requests you receive.

    • You can use the Science Says Playbook of Pricing & Promotions Optimization to research and arrive at your optimal prices.
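The steps above can be sketched in code. This is a minimal, hypothetical example (none of these names or fields come from the study or any specific chatbot platform): it benchmarks a quote against the median of past quotes, then returns a human-like persona for better-than-expected prices and an obviously-AI persona for worse-than-expected ones.

```python
# Hypothetical sketch: choose a chatbot persona based on whether a quote
# is better or worse than a benchmark (here, the median of past quotes).
from statistics import median

def choose_persona(quote: float, past_quotes: list[float]) -> dict:
    """Human-like persona for good news, obviously-AI persona for bad news."""
    benchmark = median(past_quotes)
    if quote <= benchmark:
        # Better-than-expected price: deliver it with a human-like persona.
        return {"name": "Dani", "avatar": "human_photo.png",
                "tone": "friendly, human-like"}
    # Worse-than-expected price: make it clear an AI is delivering the news.
    return {"name": None, "avatar": "robot_icon.png",
            "tone": "neutral, clearly automated",
            "intro": "Connecting you to our AI agent"}

# A $120 quote against a $100 median gets the obviously-AI persona.
print(choose_persona(120.0, [90.0, 100.0, 110.0])["avatar"])
```

In practice the benchmark could just as easily be a competitor price, and the persona dict would map onto whatever name, avatar, and tone settings your chatbot platform exposes.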

🔍 Study type

Online experiments

📖 Research

Bad News? Send an AI. Good News? Send a Human. Journal of Marketing (December, 2021)

Remember: This is a scientific discovery. In the future it will probably be better understood and could even be proven wrong (that’s how science works). It may also not be generalizable to your situation. If it’s a risky change, always test it on a small scale before rolling it out widely.

What did you think of today's insight?

Help me make the next insights 🎓 even more useful 📈


Here is how else Science Says can help your marketing:

  • 📈 Join the Science Says Platform to unlock all 250+ insights, real-world case studies, and exclusive playbooks

  • 📘 Boost your sales and profits with topic-specific Science-based Playbooks (e.g. Pricing, Ecommerce, SaaS, AI)

  • 🔬 Get on-demand evidence to make better decisions. My team of PhDs and I regularly help leading brands in FMCG (e.g. Mars), retail, and tech. Reach out here.

🎓 It took 3 of us 13 hours to accurately turn this 16-page research paper into this 3min insight.

If you enjoyed it please share it with a friend, or share it on LinkedIn and tag me (Thomas McKinlay), I’d love to engage and amplify! 

If this was forwarded by a friend you can subscribe below for $0 👇