Trust in AI is more than a moral issue

The economic potential of AI is indisputable, yet it remains largely unrealized by organizations, with an astonishing 87% of AI projects failing to succeed.

Some see this as a technology problem, others as a business problem, a culture problem or an industry problem, but the latest evidence shows it is a trust problem.

According to recent research, almost two-thirds of C-suite executives say that trust in AI drives sales, competitiveness and customer success.

Trust is a complicated word to unpack when it comes to AI. Can you rely on an AI system? If so, how? We don't immediately trust humans, and we are even less likely to immediately trust AI systems.

But a lack of trust in AI holds back its economic potential, and many of the recommendations for building trust in AI systems have been criticized as too abstract or far-reaching to be practical.

It is time for a new “AI Trust Equation” aimed at practical application.

The AI Trust Equation

The Trust Equation, a framework for building trust between people, was first proposed in The Trusted Advisor by David Maister, Charles Green and Robert Galford. The equation is: Trust = (Credibility + Reliability + Intimacy) / Self-Orientation.

It's obvious at first glance why this is an ideal equation for building trust between humans, but it doesn't translate directly to building trust between humans and machines.

To build trust between people and machines, the new AI Trust Equation is: Trust = (Security + Ethics + Accuracy) / Control.
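
To see how the equation might be applied in practice, here is a minimal sketch in Python of turning it into a comparative scoring rubric. The 1-5 scale, the field names and the example ratings are assumptions made for illustration only; the article does not prescribe how each term should be scored.

```python
from dataclasses import dataclass


@dataclass
class AITrustScore:
    """Illustrative rubric for the AI Trust Equation (hypothetical 1-5 ratings)."""
    security: float  # "Will my information be safe if I share it with this system?"
    ethics: float    # labor practices, explainability, bias, business model, values
    accuracy: float  # how useful its answers are in your own workflow and context
    control: float   # the Control term the equation divides by, echoing
                     # self-orientation in the original Trust Equation

    def trust(self) -> float:
        # Trust = (Security + Ethics + Accuracy) / Control
        return (self.security + self.ethics + self.accuracy) / self.control


# Compare two hypothetical candidate systems with the same rubric
vendor_a = AITrustScore(security=4, ethics=3, accuracy=4, control=2)
vendor_b = AITrustScore(security=5, ethics=4, accuracy=3, control=4)
print(f"Vendor A: {vendor_a.trust():.2f}  Vendor B: {vendor_b.trust():.2f}")
```

The value of such a rubric is less the number it produces than the conversation it forces: each rating has to be defended against the questions raised in the sections below.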

Security is the first step towards trust, and it consists of several key principles that are well explained elsewhere. In the context of building trust between humans and machines, it comes down to the question: "Will my information be safe if I share it with this AI system?"

Ethics is more complicated than security because it is a moral question rather than a technical one. Before investing in an AI system, leaders should consider the following:

  1. How were the people who helped build this model treated, such as the Kenyan workers involved in creating ChatGPT? Is that something I/we feel comfortable supporting by building our solutions on it?
  2. Is the model explainable? If it produces a malicious output, can I understand why? And is there anything I can do about it (see Control)?
  3. Are there implicit or explicit biases in the model? This is a well-documented problem, as shown by the Gender Shades research by Joy Buolamwini and Timnit Gebru, and by Google's recent attempt to eliminate bias in its models, which resulted in the creation of ahistorical outputs.
  4. What is the business model for this AI system? Are those whose information and life's work trained the model compensated when the model built on their work generates revenue?
  5. What are the values of the company that created this AI system, and how well do the company's actions and its leadership align with those values? OpenAI's recent choice to imitate Scarlett Johansson's voice without her consent, for example, shows a significant gap between OpenAI's stated values and Altman's decision to ignore Scarlett Johansson's choice to decline the use of her voice for ChatGPT.

Accuracy can be defined as how reliably the AI system accurately answers a series of questions throughout the workflow. This can be simplified to: "If I ask this AI a question based on my context, how useful is the answer?" The answer is directly intertwined with 1) the sophistication of the model and 2) the data it is trained on.

Control is at the heart of the conversation about trust in AI, and ranges from the most tactical question: "Will this AI system do what I want it to do, or will it make a mistake?" to one of the most pressing questions of our time: "Will we ever lose control of intelligent systems?" In both cases, the ability to control the actions, decisions and output of AI systems supports the idea of trusting and implementing them.

5 steps to use the AI Trust Equation

  1. Determine if the system is useful: Before investing time and resources in researching whether an AI platform is reliable, organizations would benefit from determining whether the platform is useful in helping them create more value.
  2. Investigate whether the platform is safe: What happens to your data when you load it into the platform? Does information leave your firewall? Working closely with your security team or hiring security consultants is critical to ensuring you can trust the security of an AI system.
  3. Determine your ethical threshold and evaluate all systems and organizations against it: If the models you invest in must be explainable, define with absolute precision a common, empirical definition of explainability within your organization, with upper and lower acceptable limits and proposed measures against systems that fall outside those boundaries (a simple sketch of encoding such thresholds follows this list). Do the same for any ethical principle your organization considers non-negotiable when it comes to leveraging AI.
  4. Define your accuracy goals and don't deviate: It can be tempting to keep using a system that doesn't perform well because it is only a precursor to human work. But if performance is below the accuracy target you have defined as acceptable for your organization, you risk poor-quality work and an increased burden on your people. More often than not, low accuracy is a model problem or a data problem, both of which can be addressed with the right level of investment and focus.
  5. Determine what level of control your organization needs and how it is defined: How much control you want decision makers and operators to have over AI systems will determine whether you want a fully autonomous system, a semi-autonomous system, AI-assisted workflows, or whether your organization's bar for sharing control with AI systems is higher than any current AI system can meet.
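
As a complement to these steps, here is a minimal sketch of how an organization might encode its thresholds and check candidate systems against them. The field names, metrics and cut-off values are all hypothetical; the article deliberately leaves the precise definitions of explainability, accuracy targets and control levels to each organization.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrustThresholds:
    """One organization's non-negotiable limits (all values hypothetical)."""
    min_explainability: float  # e.g. share of flagged outputs the team can trace to a cause
    min_accuracy: float        # accuracy target on your own workflow questions
    required_control: str      # e.g. "full", "semi-autonomous", "human-in-the-loop"


@dataclass
class CandidateSystem:
    name: str
    explainability: float
    accuracy: float
    control_mode: str
    data_stays_inside_firewall: bool


def meets_bar(system: CandidateSystem, bar: TrustThresholds) -> bool:
    """Steps 2-5 in one pass: security, ethical threshold, accuracy goal, control."""
    if not system.data_stays_inside_firewall:            # step 2: safety
        return False
    if system.explainability < bar.min_explainability:   # step 3: ethical threshold
        return False
    if system.accuracy < bar.min_accuracy:                # step 4: accuracy goal
        return False
    return system.control_mode == bar.required_control    # step 5: control level


bar = TrustThresholds(min_explainability=0.8, min_accuracy=0.9,
                      required_control="human-in-the-loop")
candidate = CandidateSystem("vendor-x", explainability=0.85, accuracy=0.92,
                            control_mode="human-in-the-loop",
                            data_stays_inside_firewall=True)
print(meets_bar(candidate, bar))  # True for this hypothetical candidate
```

However the thresholds are expressed, the point is that they are written down, empirical and applied consistently to every system under evaluation.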

In the age of AI it can be easy to look for best practices or quick wins, but the truth is: no one has all of this figured out yet, and by the time they do, it will no longer be a differentiator for you and your organization.

So, instead of waiting for the perfect solution or following the trends of others, take the lead. Build a team of champions and sponsors within your organization, tailor the AI Trust Equation to your specific needs and start evaluating AI systems accordingly. The rewards of such an undertaking are not only economic, but also fundamental to the future of technology and its role in society.

Some tech companies see market forces moving in this direction and are working to develop the right commitments, controls and visibility into how their AI systems operate, such as with Salesforce's Einstein Trust Layer, while others argue that any level of visibility would mean giving up competitive advantage. You and your organization will need to determine the level of trust you want to have, both in the output of AI systems and in the organizations that build and maintain them.

The potential of AI is enormous, but will only be realized when AI systems and the people who create them can achieve and maintain trust within our organizations and society. The future of AI depends on it.

Brian Evergreen is the author of "Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence."
