Responsible artificial intelligence (AI) should be embedded in a company’s DNA.
“Why is bias in AI something we all need to think about today? It’s because AI is the fuel for everything we do today,” Miriam Vogel, chairman and CEO of EqualAI, told a livestream audience at this week’s Transform 2022 event.
Vogel discussed the topics of AI bias and responsible AI in depth in a fireside conversation led by Victoria Espinel of the trade group The Software Alliance.
Vogel has extensive experience in technology and policy, including at the White House, the United States Department of Justice (DOJ), and the nonprofit EqualAI, which is dedicated to reducing unconscious biases in the development and use of AI. She also chairs the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the president and the White House on AI policy.
As she noted, AI is becoming increasingly important to our everyday lives – and vastly improving – but at the same time, we need to understand its many inherent risks. Everyone – builders, makers and users alike – needs to treat AI as “our partner,” while also making it efficient, effective and reliable.
“You can’t build trust with your app if you’re not sure if it’s safe for you, or if it’s built for you,” says Vogel.
This is the time
We need to tackle the problem of responsible AI now, Vogel said, as we are still setting “the rules of the road.” What AI is remains something of a “grey area.”
And if it is not addressed? The consequences can be serious. People may be denied proper healthcare or employment opportunities because of AI bias, and “lawsuits will come, regulations will come,” Vogel warned.
If that happens, “we can’t untangle the AI systems that we’ve become so dependent on and that have become intertwined” with our lives, she said. “Right now, today, is the time for us to be very aware of what we’re building and implementing, to make sure we assess the risks and make sure we mitigate those risks.”
Good ‘AI hygiene’
Companies must address responsible AI now by establishing strong governance practices and policies and building a secure, collaborative, visible culture. This needs to be pushed through the organization and handled with care and intention, Vogel said.
When hiring, for example, companies can start by simply asking if platforms have been tested for discrimination.
“Just that fundamental question is so extremely powerful,” Vogel said.
An organization’s HR team must be supported by AI that is inclusive and does not exclude the best candidates from employment or promotion.
It’s a matter of “good AI hygiene,” Vogel said, and it starts with the C-suite.
“Why the C-suite? Because ultimately, if you don’t have buy-in at the highest level, you can’t get the governance framework in place, you can’t get investment in the governance framework, and you can’t get buy-in to make sure you’re doing it the right way,” Vogel said.
In addition, detecting bias is an ongoing process: once a framework is established, there must be a long-term process to continuously assess whether bias is interfering with systems.
“Bias can be embedded at any human touchpoint,” from data collection to testing, to design, to development and implementation, Vogel said.
Responsible AI: a human-level problem
Vogel pointed out that the conversation about AI bias and AI responsibility was initially limited to programmers, which she thinks is “unfair.”
“We can’t expect them to solve humanity’s problems on their own,” she said.
It’s human nature: people can only envision as broadly as their experience or creativity allows. So the more voices that can be brought in, the better for determining best practices and ensuring that the age-old problem of bias doesn’t infiltrate AI.
This is already underway, with governments around the world drafting regulatory frameworks, Vogel said. The EU is creating GDPR-like regulation for AI, for example. In the US, meanwhile, the Equal Employment Opportunity Commission and the DOJ recently released an “unprecedented” joint statement on reducing discrimination against people with disabilities – something that AI and its algorithms can exacerbate if left unchecked. Congress has also directed the National Institute of Standards and Technology to develop a risk management framework for AI.
“We can expect a lot from the US in terms of AI regulation,” Vogel said.
This also applies to the recently formed committee she now chairs.
“We are going to make an impact,” she said.
Don’t miss the full conversation from the Transform 2022 event.