How to prepare your workforce to think like AI professionals

If you suddenly feel the urge to smile when you see this stone, you are in good company.

As humans, we often irrationally ascribe human-like behavior to objects that have some, but not all, human characteristics (a tendency known as anthropomorphism) – and we see this happening increasingly with AI.

In some cases, anthropomorphism is as benign as saying “please” and “thank you” while interacting with a chatbot, or praising generative AI when the output matches your expectations.

But etiquette aside, the real challenge arises when you watch AI “reason” through a simple task (like summarizing this article) and then expect it to do the same effectively across an anthology of complex scientific articles. Or when you watch a model generate a response about Microsoft's recent earnings call and expect it to conduct market research if you feed it the same kind of earnings transcripts from ten other companies.


These seemingly similar tasks are actually very different for models because, as Cassie Kozyrkov puts it, “AI is as creative as a paintbrush.”

The biggest barrier to productivity with AI is the human ability to use it as a tool.

Anecdotally, we've heard from customers who rolled out Microsoft Copilot licenses and then scaled them back because users didn't feel they added value.

Chances are, those users experienced a mismatch between their expectations of the problems AI can solve well and the reality. And sure, the polished demos look magical, but AI isn't magic. I'm very familiar with the disappointment you feel the first time you realize, “Oh, AI is not good for that.”

But instead of throwing up your hands and quitting gen AI, you can work on building the right intuition to understand AI/ML more effectively and avoid the pitfalls of anthropomorphism.

Defining intelligence and reasoning for machine learning

We've always had a poor definition of intelligence. If a dog begs for a treat, is that intelligent? What about when a monkey uses a tool? Is it intelligence when we intuitively know to keep our hands away from heat? If computers do the same things, are they intelligent?

I was (as recently as 12 months ago) in the camp that refused to admit that large language models (LLMs) could “reason.”

However, in a recent discussion with some trusted AI founders, we proposed a possible solution: a rubric to describe levels of reasoning.

Just as we have rubrics for reading comprehension or quantitative reasoning, what if we could introduce an AI equivalent? This could be a powerful tool for communicating to stakeholders the expected level of 'reasoning' of an LLM-driven solution, along with examples of what is not realistic.
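To make this concrete, here is a minimal sketch (in Python) of what such a rubric could look like. Everything in it is hypothetical: the level numbers, names and example tasks are invented for illustration, not a proposed standard. The point is simply that expected levels of “reasoning” can be written down and shared with stakeholders.

```python
# A hypothetical "levels of reasoning" rubric, sketched as data.
# The levels, names and example tasks below are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class ReasoningLevel:
    level: int
    name: str
    realistic_task: str    # what stakeholders can reasonably expect
    unrealistic_task: str  # what they should not expect at this level


RUBRIC = [
    ReasoningLevel(1, "Recall and restate",
                   "Summarize a single news article.",
                   "Synthesize an anthology of complex scientific papers."),
    ReasoningLevel(2, "Single-source analysis",
                   "Answer questions about one earnings-call transcript.",
                   "Conduct market research across ten companies' transcripts."),
    ReasoningLevel(3, "Multi-source synthesis",
                   "Compare two documents on a narrow, well-defined question.",
                   "Deliver a reliable industry-wide strategic analysis."),
]


def describe(entry: ReasoningLevel) -> str:
    """Format one rubric entry for communicating expectations to stakeholders."""
    return (f"Level {entry.level} ({entry.name})\n"
            f"  Realistic: {entry.realistic_task}\n"
            f"  Not realistic: {entry.unrealistic_task}")


for entry in RUBRIC:
    print(describe(entry))
```

Even a toy version like this forces a team to state, in writing, where a given LLM-driven solution sits on the scale and what falls outside its scope.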

People have unrealistic expectations of AI

We tend to be more forgiving when it comes to human error. Self-driving cars, for instance, are statistically safer than human drivers, but when accidents happen, there is an uproar.

This compounds the disappointment when AI solutions fail at a task you would expect a human to handle.

I hear a lot of anecdotal descriptions of AI solutions as a vast army of 'interns'. And yet machines still fail in ways that humans don't, while far surpassing them at other tasks.

Knowing this, it's not surprising that fewer than 10% of organizations successfully develop and implement gen AI projects. Other factors, such as misalignment with corporate values and unexpectedly costly data management efforts, only compound the challenges companies face with AI projects.

One of the keys to overcoming these challenges and increasing project success is to give AI users better intuition about when and how to use AI.

Using AI training to build intuition

Training is key to keeping pace with the rapid evolution of AI and to redefining our understanding of machine learning (ML) intelligence. AI training itself can sound quite vague, but I've found that dividing it into three distinct categories has been helpful for most companies:

  1. Safety: How to use AI safely and avoid new, AI-enhanced phishing scams.
  2. Literacy: Understanding what AI is, what to expect from it and how it can break.
  3. Readiness: Knowing how to skillfully (and efficiently) use AI-powered tools to perform higher quality work.

Protecting your team with AI safety training is like arming a new cyclist with knee and elbow pads: it may prevent a few abrasions, but it won't prepare them for the challenges of intense mountain biking. Meanwhile, AI readiness training ensures that your team uses AI and ML optimally.

The more you give your workforce the opportunity to interact safely with generative AI tools, the more they will build the right intuition for success.

We can only guess what capabilities will emerge over the next twelve months, but if you can map them back to the same rubric (levels of reasoning) and know what to expect as a result, you will be better positioned to prepare your workforce for success.

Know when to say “I don't know,” know when to ask for help – and, most importantly, know when a problem is beyond the scope of a given AI tool.

Cal Al-Dhubaib is head of AI and data science at Further.
