87% of organizations embrace generative AI, but far fewer assess the risks



A new PwC survey of 1,001 U.S.-based executives in business and technology roles found that 73% of respondents are currently using or planning to use generative AI in their organizations.

However, only 58% of respondents have begun to assess the risks of AI. For PwC, responsible AI encompasses value, safety and trust, and should be part of a company's risk management processes.

Jenn Kosar, US AI assurance leader at PwC, told VentureBeat that six months ago it was acceptable for companies to launch AI projects without a responsible AI strategy in place, but that is no longer the case.

“We’re further along in the cycle now, so it’s time to build responsible AI,” Kosar said. “Previous projects were internal and limited to small teams, but we’re now seeing large-scale adoption of generative AI.”

She added that AI pilots play an important role in shaping responsible AI strategies, since they let companies determine what works best for their teams and how they should deploy AI systems.

Responsible AI and risk assessment have been in the news in recent days after Elon Musk's xAI launched a new image generation service through its Grok-2 model on the social platform X (formerly Twitter). Early users report that the model appears to be largely unrestricted, allowing users to create all kinds of controversial and inflammatory content, including deepfakes of politicians and pop stars committing violent acts or finding themselves in overtly sexual situations.

Priorities to build on

Survey respondents were asked about 11 capabilities that PwC identified as “a subset of capabilities that organizations appear to be prioritizing most frequently today.” These included:

  1. Further training
  2. Hiring Embedded AI Risk Specialists
  3. Periodic training
  4. Data protection
  5. Data management
  6. Cybersecurity
  7. Model testing
  8. Model management
  9. Third party risk management
  10. Specialized software for AI risk management
  11. Monitoring and auditing

According to the PwC survey, more than 80% of respondents reported progress on these capabilities. However, only 11% claimed to have implemented all 11, and even then PwC said, “We suspect that many of them are overestimating their progress.”

It added that some of these markers for responsible AI can be difficult to manage, which could be one reason why organizations struggle to fully implement them. PwC pointed to data governance, which should define AI models’ access to internal data and put safeguards around it. “Legacy” cybersecurity methods could also be insufficient to protect the model itself from attacks such as model poisoning.

Accountability and responsible AI go hand in hand

To guide companies through the AI transformation, PwC has proposed ways to build a comprehensive responsible AI strategy.

One of those is establishing ownership, which Kosar said was one of the challenges respondents faced. She said it is important that accountability and ownership for responsible AI use and implementation rest with a single executive. This means treating AI safety as something that goes beyond technology, with either a chief AI officer or a responsible AI leader who works with stakeholders across the company to understand its business processes.

“Perhaps AI is the catalyst to bring technology and operational risk together,” Kosar said.

PwC also advises thinking about the full lifecycle of AI systems: going beyond theory to implement safety and trust policies across the organization, preparing for future regulations by deepening the commitment to responsible AI practices, and developing a plan for transparency with stakeholders.

According to Kosar, the most surprising thing about the survey was the comments from respondents who believed that responsible AI would add commercial value to their businesses, and she believes this will encourage more companies to think about it more deeply.

“Responsible AI as a concept is not just about risk; it also needs to create value. Organizations said they see responsible AI as a competitive advantage, a way to base their services on trust,” she said.