
NEW: The Wharton Blueprint for AI Agent Adoption

Launching our latest collaboration with Wharton. With insights from the latest science, Wharton researchers, and business leaders from Google, Zapier, ServiceNow, and more.

TL;DR: We partnered with Wharton again to solve the biggest problem in AI right now: the psychological barriers stopping people from adopting AI agents. Download it (for free!) here.

The reputation we’ve built over the years in the academic community for how we cover their research (practically, extremely accurately, and with no sugar-coating) is one of the things I’m most proud of.

It’s thanks to this that we get to work with an incredible network of top researchers and universities. Now, for the second year in a row, we’re partnering with the #1 business research university in the world (FT Rankings 2026), The Wharton School of the University of Pennsylvania.

Last year, we explained how to design highly effective AI Chatbots (download the previous Blueprint here). This year, we’re solving the biggest challenge people and organizations are facing in AI right now: AI agent adoption.

What I find especially interesting about this problem? It’s all about psychology (much like what we regularly cover in Science Says).

Intrigued? Read on.

🎓 The problem with AI agents is not tech, it’s psychology

AI agents are here, and they work. The difference compared to conversational AI (the ChatGPT, Perplexity, Claude conversations we’re used to by now) is that agents actually do things.

You can not only ask an AI agent to research and suggest options for your weekend trip to Rome, but also tell it to actually choose hotels, flights, and restaurants, and book them all for you.

Clearly, that’s extremely powerful. 

At the same time, you probably felt uncomfortable just reading that. You may have thought:

  • Is it actually able to contact restaurants? 

  • What if it books a bad hotel? 

  • How much should I monitor it as it takes these actions?

Those are legitimate doubts, and they are exactly what is drastically slowing adoption.

Overcoming these frictions is all about human-AI psychology, and that’s exactly what we tackled in this new Blueprint.

📘 Introducing: The Wharton Blueprint for AI Agent Adoption

We asked business leaders at enterprise companies with a combined workforce of 700,000+ employees, including Google, ServiceNow, Wolters Kluwer, Workato, Concentrix, and Zapier, what challenges they are seeing on the ground.

Then, we analyzed the latest scientific research, along with insights from Wharton's faculty, to understand how to overcome the 3 main frictions that prevent people and organizations from fully adopting AI agents.

🧠 The 3 psychological frictions holding back adoption

There are 3 key psychological frictions holding back AI agent adoption:

  • Perceived Competence: "Do I believe the agent can actually do this?" 

  • Trust: "Should I trust it with this specific task?"

  • Delegation of Control: "How much autonomy should I actually give it?" 

We tackle them in depth in the Blueprint (which you can download here).

Here’s a preview of some of my favourite parts of the Blueprint 👇

🧱 Psychological friction 1: Perceived Competence

👀 Example insight on how to overcome lack of perceived competence:

❌ People were less willing to use AI agents that communicated in a friendly, warm tone of voice, because it led them to believe the agents were less competent.

✅ AI agents should clearly explain their reasoning and cite the criteria they will use to reach a decision.

🧱 Friction 2: Trust

👀 Example insight on how to overcome lack of trust: 

❌ AI agents shouldn’t be vague and leave users to guess at their weaknesses.

✅ People have higher trust in AI agents when they clearly understand their limitations. Always be clear and upfront about what an agent can and cannot do.

🧱 Friction 3: Delegation of Control

👀 Example insight on how to get delegation of control right:

❌ AI agents shouldn’t be fully autonomous, making decisions without user input. At the other extreme, they also shouldn’t need highly specific guidance at every stage of the process.

✅ Agents should have a moderate level of autonomy: not too little, not too much. The best balance is to present options and leave the final decision up to the user.
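For readers building agents, the "present options, user decides" pattern above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`propose_options`, `book`, the sample hotel data), not code from the Blueprint: the agent ranks candidates against explicit criteria and surfaces a shortlist, but the side-effecting step only runs after the user picks.

```python
# Minimal sketch of the "moderate autonomy" pattern: the agent narrows
# choices against explicit criteria, but the user makes the final call
# before any action (e.g. a booking) is taken.

def propose_options(candidates, criteria, top_n=3):
    """Rank candidates by how many criteria they satisfy; return the top few."""
    scored = [(sum(c(item) for c in criteria), item) for item in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in scored[:top_n]]

def book(choice):
    # Placeholder for the side-effecting step; runs only after user approval.
    return f"Booked: {choice['name']}"

hotels = [
    {"name": "Hotel Roma", "price": 120, "rating": 4.5},
    {"name": "Trastevere Inn", "price": 90, "rating": 4.0},
    {"name": "Luxe Palazzo", "price": 300, "rating": 4.8},
]
criteria = [lambda h: h["price"] <= 150, lambda h: h["rating"] >= 4.0]

options = propose_options(hotels, criteria)
user_choice = options[0]  # in a real flow, the user picks interactively
print(book(user_choice))  # → Booked: Hotel Roma
```

The key design choice is the boundary between `propose_options` (autonomous, explains itself via explicit criteria) and `book` (gated on user input), matching the moderate-autonomy balance described above.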

⬇️ Get your free copy of the Blueprint

It’s taken many months of work to bring this Blueprint together, with contributions from 15 of the best minds in AI right now. But the best part for you? It’s 100% free.

P.S.: Want to learn more about this? I’m doing a live AI Horizons session at Wharton tomorrow. You can register to watch it here.