Four Trust Types That Make or Break AI Projects

1. Full trust: High cognitive and emotional trust

These employees saw strategic applications beyond the tool’s basic functions. One of them noted: “You can observe who you’re collaborating with, as well as who you’re not collaborating with… you can consider your own conduct and determine what kind of individuals you need to work with.”

Emotionally, these employees felt positive about AI: “I think it’s where the world is going, and for me… if I’m working now and I’m being paid, why shouldn’t it be transparent?” Significantly, employees with full trust didn’t change their digital behaviours, providing the AI with accurate data needed for optimal performance.

2. Uncomfortable trust: High cognitive trust and low emotional trust

The second form of trust involved employees who recognised the tool’s value but worried about its implications. One manager said: “That’s a wonderful idea that you would somehow be able to figure out who would be the best expert for this… But at the same time, just when you may have started with the positive potentials, you may not have noticed these negative potentials.”

Many feared the potential misuse of data: “There is always the worry that those data will be used for something else I don’t have any control over… For example, against us. Now, it’s focused on people, not from the management side. But I guess companies want to be more efficient, and – well, there’s a fine line.”

To resolve this cognitive-emotional conflict, these employees became wary of their digital footprints. They limited the information visible to the AI by marking calendar events as private or using generic descriptions.

3. Blind trust: Low cognitive trust and high emotional trust

Some employees questioned the AI tool’s competence while still feeling comfortable with it. As one interviewee said: “I sometimes feel like it is not tracking the amount of time I’ve spent on either technology properly.” Another said the map generated by the tool did not accurately reflect the expertise of some colleagues. “It was hard to find the person with actual knowledge.”

Despite these concerns, they didn’t feel threatened by the technology: “I am not concerned about sharing information because I know that the information… is information that generally could benefit other people to find as well.”

Interestingly, these employees responded by detailing their digital footprints. They added more information to their calendars, project entries and online discussions to help improve the tool’s performance. As one employee explained: “Let’s take a step forward and provide the necessary details to make our tool more efficient.”

4. Full distrust: Low cognitive and emotional trust 

Employees with full distrust neither believed in the tool’s capabilities nor felt comfortable with it. They described negative experiences (“I tried using [the tool], and nothing worked at all”) and questioned its fundamental approach (“We shouldn’t trust only data or digital services to make decisions”).

These employees also experienced negative emotions, particularly fear. One confided: “I feel that it is dangerous. My fear is that it may be the misuse of data. They [the collected data] are used against you in some cases.”

Their responses were the most damaging to the AI system – either withdrawing their data entirely (“I just opt out”) or actively manipulating their digital footprints by using certain keywords to shape how they appeared in the system.

These behaviours created a vicious cycle. When employees withdrew, confined or manipulated their digital footprints, the AI received imbalanced or inaccurate data, decreasing its performance. As one interviewee noted: “Some experts disappeared from the visual map.”

Lower performance reduced trust further, leading to decreased usage until eventually, the project failed.

How to make your AI initiative stick

If there’s one key insight from the study, it is that a people-centric approach acknowledging both the thinking and feeling dimensions of trust is essential. Trust is not a monolithic, one-size-fits-all concept.

For starters, leaders introducing an AI tool to the workplace should provide comprehensive training that explains how AI works, its capabilities and its limitations. Such efforts build cognitive trust. Leaders should also develop and communicate clear AI policies that define what data will be collected and how it will be used. This helps employees understand the tool’s role and what it’s capable of, and, just as importantly, how their concerns will be addressed and their personal data protected. When people feel at ease, they are more likely to form emotional trust.

This brings us to managing expectations about AI performance. Managers should encourage patience during the early stages, when results may be inconsistent, and celebrate AI-driven achievements or improvements to demonstrate progress and reinforce the value of the initiative.

The study also shows that leaders must address feelings, not just facts. Share your own enthusiasm about AI’s potential benefits. Create psychological safety by encouraging the open expression of concerns about AI. Address anxieties with empathy rather than dismissal. When employees feel their emotions are acknowledged, they’re more likely to develop positive connections with new technologies.

Remember: true AI transformation starts not with algorithms, but with a sophisticated understanding of the different forms of trust and a commitment to fostering them as part of your AI initiative.
