Building Responsible AI: Five Pillars for an Ethical Future

As long as there has been technological progress, there have been concerns about its implications. The Manhattan Project is the classic example of scientists grappling with their role in unleashing innovation as devastating as nuclear power. Sir Solomon "Solly" Zuckerman, a scientific adviser to the Allies during World War II and later a prominent advocate of nuclear non-proliferation, offered an insight in the 1960s that still rings true today: "Science creates the future without knowing what the future will be."

Artificial intelligence (AI) is a collective term for machine learning (ML) software designed to perform complex tasks that today require human intelligence, and it is destined to play a major role in society. The recent surge in adoption has brought intense scrutiny of how AI is being developed and by whom, and has revealed how bias can shape its design and functionality. The EU is now planning legislation aimed at mitigating the potential harms of AI, under which responsible AI would be required by law.

It’s easy to see why such guardrails are needed. Since humans build AI systems, those systems inevitably absorb their creators’ assumptions and biases. Some ugly examples have already surfaced: the Apple Card’s credit algorithm and Amazon’s hiring tool were each investigated for gender bias, and Google had to modify its Photos service after it applied racist tags. The companies have since fixed these problems, but the lesson stands: the technology is advancing rapidly, and building it without considering risk is like sprinting blindfolded.

Building Responsible AI

Melvin Greer, Intel’s chief data scientist, points out in VentureBeat that responsible AI means creating a system that not only does what it claims, but does it in the context of a broader perspective that recognizes social norms and morals.

In other words, the people designing an AI system must take responsibility for their choices and, essentially, “do the right thing” when implementing the software.

If your company or team is embarking on building or embedding an AI system, here are the five pillars that should form its foundation:

1. Accountability

You might assume that accountability is considered in AI design from the beginning, but unfortunately this is not always the case; engineers and developers can easily get lost in the code. The big question that arises when keeping humans in the loop is: how much do you trust your ML system before letting it start making decisions?

The most obvious example of why this matters is self-driving cars, where we “trust” the vehicle to “know” the right decision in place of a human driver. In other scenarios, such as lending decisions, designers still need to consider which indicators of fairness and bias are relevant to the ML model. A wise best practice is to establish a standing AI ethics committee to help oversee these policy decisions, encourage audits and reviews, and keep the system in step with current social standards.
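As a concrete illustration of the kind of fairness indicator such a committee might track, here is a minimal sketch (the approval data and group labels are hypothetical) that computes the demographic parity difference for a lending model, that is, the gap in approval rates between two applicant groups:

```python
import numpy as np

def demographic_parity_difference(approved: np.ndarray, group: np.ndarray) -> float:
    """Gap in approval rates between two groups (0 = perfectly equal)."""
    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs: 1 = loan approved, 0 = denied,
# with an applicant group label for each decision.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(approved, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 here -- worth a review
```

A metric like this is only a starting point; an ethics committee would decide which fairness definitions apply and what thresholds should trigger an audit.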

2. Reproducibility

Most organizations pull data from a variety of sources (data warehouses, cloud storage providers, and so on), and if that data isn’t uniform (that is, matching one-to-one across sources), problems can arise when you try to reproduce an insight, troubleshoot an issue, or update a feature. For companies developing AI systems, it is important to standardize the ML pipeline and establish a comprehensive catalog of data and models. This not only streamlines testing and validation but also improves the ability to create accurate dashboards and visualizations.
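As one way to put this into practice, here is a minimal sketch, assuming scikit-learn and a hypothetical catalog format, that pins random seeds and records a fingerprint of the training data alongside the model, so the exact run can be reproduced and cataloged:

```python
import hashlib
import json

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

SEED = 42  # pin all randomness so the run is repeatable
rng = np.random.default_rng(SEED)

# Stand-in for data pulled from a warehouse or cloud bucket.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One standardized pipeline instead of ad-hoc preprocessing per team.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(random_state=SEED)),
])
pipeline.fit(X, y)

# Fingerprint the training data so the catalog ties this exact
# model version to the exact bytes it was trained on.
data_hash = hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest()
catalog_entry = {"model": "loan-scorer-v1", "seed": SEED, "data_sha256": data_hash}
print(json.dumps(catalog_entry, indent=2))
```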

3. Transparency

As with most things, transparency is the best policy. For ML models, transparency equates to interpretability: being able to explain how the model reaches its conclusions. This is especially important in sectors such as banking and healthcare, which must be able to explain and justify to customers why a particular model was built and show that it is fair and free of unwanted bias. Put simply, if an engineer cannot justify why a given ML feature exists for the benefit of the customer, it shouldn’t be there. This is where monitoring and metrics play a major role: tracking statistical performance is essential to ensuring the long-term effectiveness of an AI system.
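To make the interpretability point concrete, here is a minimal sketch, assuming scikit-learn and hypothetical feature names, that uses permutation importance to surface which features a model actually relies on, so an engineer can justify (or remove) each one:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical lending features; "shoe_size" stands in for a
# feature no one could justify to a customer.
feature_names = ["income", "debt_ratio", "credit_history_len", "shoe_size"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much test accuracy drops:
# a near-zero drop means the model barely uses that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:20s} {score:+.3f}")
```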

4. Security

For AI, security is focused on how an enterprise protects its ML models, typically with techniques such as encrypted computation and adversarial testing; an AI system cannot be responsible if it is vulnerable to attack. Consider a real-life scenario: a computer vision model was designed to detect stop signs, yet when someone placed a few small stickers on a sign (a change a human driver would barely register), the system was fooled. Examples like this can have serious safety consequences, so security must be a constant concern to prevent such flaws.
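To show what adversarial testing looks like in practice, here is a minimal sketch, assuming PyTorch and a toy untrained classifier, of the fast gradient sign method (FGSM), the same family of attack behind the stop-sign example: a tiny, nearly invisible perturbation crafted from the model’s own gradients.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image classifier: 2 classes over a flat "image".
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16, requires_grad=True)  # the clean input
y = torch.tensor([0])                       # its true label

# FGSM: nudge each input value slightly in the direction that
# increases the loss -- the sign of the gradient w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1  # perturbation budget; small enough to look unchanged
x_adv = (x + epsilon * x.grad.sign()).detach()

# With a real trained model, even this small change can flip the output.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Running attacks like this against your own models before deployment is the point of adversarial testing: find the flaw before someone with a sticker does.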

5. Privacy

This last pillar has always been a hot-button issue, especially amid the many Facebook scandals involving customer data. AI collects vast amounts of data, so there must be very clear guidelines about what that data is being used for (think of Europe’s GDPR). Regulation aside, companies designing AI should make privacy a top priority and generalize their data so that individual records are never stored. This is especially important in healthcare, where patient data is highly sensitive. Techniques such as federated learning and differential privacy are worth exploring here.
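As a small taste of what differential privacy looks like, here is a minimal sketch (pure NumPy, with hypothetical patient records) of the Laplace mechanism: calibrated noise added to an aggregate query so that no individual record can be inferred from the published result:

```python
import numpy as np

rng = np.random.default_rng(7)

def dp_count(values: np.ndarray, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query changes by at most 1 when a single record is added
    or removed (sensitivity = 1), so noise scaled to 1/epsilon hides
    any one individual's presence in the data.
    """
    true_count = float(np.sum(values))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical: 1 = patient has the condition, 0 = does not.
records = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])

print("true count:", int(records.sum()))
print("published (epsilon=0.5):", round(dp_count(records, 0.5), 1))
```

The smaller the epsilon, the more noise and the stronger the privacy guarantee; choosing that trade-off is exactly the kind of policy decision the ethics committee from pillar one should own.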

Responsible AI: the path forward

Even with these five pillars in place, responsibility in AI can still feel like a game of whack-a-mole: as soon as one ethical nuance is addressed, another emerges. That is simply part of instilling an exciting new technology in the world, and, as with the internet, we will never stop debating, tinkering with, and improving AI’s capabilities.

Make no mistake, though: the impact of AI is enormous and will be lasting across many industries. A good way to prepare now is to focus on building diverse teams within your organization. Bringing together people of different races, genders, backgrounds, and cultures reduces the potential for bias before the technology is even touched. Involving more people in the process and practicing continuous monitoring will make AI more efficient, ethical, and responsible.

Dattaraj Rao is Persistent’s Chief Data Scientist...
