Rob Reich: AI developers need a code of conduct

Rob Reich wears many hats: political philosopher, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

In recent years, Reich has delved deeply into the ethical and political issues raised by revolutionary advances in artificial intelligence (AI). His work is not always easy for technologists to hear. In his book, System Error: Where Big Tech Went Wrong and How We Can Reboot, Reich and his co-authors (computer scientist Mehran Sahami and social scientist Jeremy M. Weinstein) argue that technology companies and developers are so fixated on “optimization” that they often trample on human values.

More recently, Reich has argued that the AI community lags far behind in developing robust professional norms. That poses risks to a host of democratic values, from privacy and civil rights to protection against harm and exploitation.

He spoke about the importance of community norms at the Spring 2022 HAI Conference on Key Advances in AI.

In an interview, he discussed what such a professional code might look like and who should be involved in shaping it.

The need for maturity in AI ethics

You say that AI and computer science in general are “immature” in their professional ethics. What do you mean?

Rob Reich: AI science is like a late-stage teenager, newly aware of his extraordinary powers but without a fully developed frontal cortex that might curb his risky behavior and lead him to consider his broader social responsibilities. Computer science didn’t come into existence until the 1950s and ’60s, and people with computer science degrees didn’t become socially powerful until the 2000s. Compared with older fields such as medicine or the law — or even trades that carry licensing requirements — the institutional norms of professional ethics in computer science are developmentally immature.

What kind of ethics and standards are lacking in AI?

Reich: Think about what happened with another technological leap: CRISPR, the gene-editing tool that has created transformative opportunities in fields from therapeutics to agriculture. One of its co-inventors, Jennifer Doudna, who shared a Nobel Prize in Chemistry, has told the story of waking up from a nightmare one night and wondering: What would happen if Hitler had this? She decided that biomedical researchers needed to put some limits on the technique, and she helped convene her fellow biomedical researchers and their respective professional associations. They adopted a moratorium on using CRISPR for germline editing (on human eggs, sperm, or embryos).

A few years later, when one researcher actually did use CRISPR on human embryos, he was immediately ostracized by other scientists and barred from professional meetings. No journal would publish his papers. In fact, the Chinese government eventually put him in jail.

Can you name any AI scientist whose model led to their being banned from the respectable practice of AI science? In my experience, hardly anyone can. Imagine someone developing an AI model that looks at your faceprint and predicts the likelihood that you will commit a crime. That strikes me as the equivalent of phrenology and the discredited practice of race science. But right now, my sense is that such work would cost a person nothing in terms of professional opportunities.

AI has nothing comparable to the footprint of ethics in healthcare and biomedical research. Every hospital has an ethics committee. If you want to do biomedical research, you have to go through an institutional review board. If you’re tinkering with a new drug in your garage, you can’t just test it on the people around you; the FDA has to approve trials. But if you have an AI model, you can train it however you want, deploy it however you want, and even share it openly with other, potentially bad, actors to use as well.

Individual companies have, of course, developed their own codes of conduct. But unless corporate practices spill over into industry-wide norms, or into professional standards for all responsible researchers wherever they work, company ethics codes don’t amount to much. They don’t stop bad practices from happening elsewhere, and society is no better off for the gold star attached to a single company.

Drafting an AI Code of Ethics

What are the benchmarking principles that can underlie a code of ethics or an AI rights statement?

Reich: Some norms from healthcare and biomedical research offer a starting point, though I don’t believe you can simply transplant such standards wholesale from medicine to AI.

Take, for example, the Hippocratic Oath: first, do no harm. In AI, researchers and developers could adopt strong norms for understanding how algorithmic models might harm marginalized groups before a model is released or deployed.

They could have standards on privacy rights, building on human rights doctrines, that limit the widespread practice of scraping personal data from the open internet without first obtaining consent.

They could develop standards that place appropriate limits on how facial recognition tools are deployed in public. In biometrics, you can point to some basic human interests at stake in surveillance, whether it is carried out by a drone, a police camera, or a person with a cell phone.

What are some actionable ideas to create real traction for a code of ethics?

Reich: First, just as happened with CRISPR, it is important for leading AI scientists to speak out on behalf of professional ethics and a broader code of responsible AI. High-status AI scientists are essential to the development of responsible AI.

Second, beyond the actions of individuals, we need a more institutionally robust approach. Responsible AI is not only a matter of internal regulation through professional norms, but also of external regulation through algorithmic auditing firms and appropriate civil society organizations that can hold companies accountable. The work of the Algorithmic Justice League is exemplary of the latter.

We don’t necessarily have to set up or invent new agencies. For example, we already have the Equal Employment Opportunity Commission. If it isn’t already, it should be looking into how some of these AI-powered hiring tools and resume-screening systems work.

We could also have some analog to the institutional review boards that oversee research involving human subjects. When someone decides to scrape images from the internet in order to identify criminal tendencies from photos and faceprints, I wonder what would have happened had they gone through an institutional review board. Maybe it would have said no. But if you’re an AI scientist, you typically never have to deal with an institutional review board. You just go out and do it.

Again, that’s where institutional standards need to catch up to the power of AI.

Add checks and balances

Should developers be required to audit for potential biases or other dangers?

Reich: Naturally. Every major construction project must have an environmental impact report. If it turns out that developing a piece of land would threaten an endangered species, developers must at least adopt mitigation strategies before moving forward. Analogously, you could imagine algorithmic impact statements: you would have to show that the risk of bias is minimal before a model is put into practice. There are also technical approaches to this, such as the use of model cards and datasheets for datasets.

We also need to significantly upgrade the talent flowing into algorithmic auditing firms. My hope is that technical career paths extend beyond startups and big tech companies to include work in the public interest. Why is it more competitive to get a low-paying job at the Department of Justice than a corporate law job? At least in part because of the chance to do something for the common good.

What does it take to establish the kind of professional or community standards you envision?

Reich: Unfortunately, it often takes scandals, like the Nazi-era medical experiments or the Tuskegee experiments on Black men, to provoke a significant response from policymakers and the profession alike.

But it doesn’t have to be a reactive process. I would rather see AI science take a proactive approach.

One example is a recent blog post from members of the Center for Research on Foundation Models arguing for the establishment of a review board that would set norms for the responsible release of foundation models.

Another example is a pilot project here at Stanford HAI that requires an Ethics and Society Review for every project seeking funding. The review panel is an interdisciplinary team of experts from anthropology, history, medicine, philosophy, and other fields. This past December, members of the team published a paper in Proceedings of the National Academy of Sciences describing the findings and how the ESR could be extended to other areas of research in both industry and academia.

It is a well-known pattern in history that scientific discovery and technological innovation outpace our collective capacity to install sensible regulatory guardrails. In System Error, we call this the race between disruption and democracy. With AI, the pace of innovation has accelerated, and the innovation frontier is far ahead of our public policy frameworks. That makes it all the more important to rely on professional norms and codes of conduct, so that new technologies in AI are developed and deployed with social responsibility.

Edmund L. Andrews is a contributing writer for the Stanford Institute for Human-Centered AI.

Rob Reich is a professor of political science in the Stanford School of Humanities and Sciences and, by courtesy, a professor of education. He is also a senior fellow at the Freeman Spogli Institute for International Studies and associate director of the Stanford Institute for Human-Centered AI.

This story originally appeared on hai.stanford.edu. Copyright 2022.
