Can machines learn to explain their decisions?

Executive Summary

Artificial intelligence, or AI – machine functions that exhibit abilities of the human mind such as learning and problem solving – is in a phase of rapid development. U.S.-based and international technology companies are competing intensely to develop AI systems that can automate tasks in industries ranging from entertainment to marketing to human resources. However, AI systems are so complex that even their developers are sometimes uncertain about how they reach decisions. This raises distinctive ethical and regulatory challenges, as researchers point to evidence that AI systems can favor certain groups of people over others in decisions about jobs, services or loans. Many researchers are calling for greater transparency and for a legal “right to explanation” for those affected by AI decisions.

Here are some key takeaways:

  • Investments in AI are being fueled by tech giants such as Google and Baidu; companies invested more than $20 billion in AI-related mergers and acquisitions globally last year.

  • There has also been an upswing in research into AI systems such as deep learning, with most of it published in China and the United States.

  • AI’s decision-making in high-stakes fields such as hiring and criminal justice has elevated ethical concerns about bias and accountability.



Resources for Further Study

Books

Domingos, Pedro, “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” Basic Books, 2015. A computer science professor at the University of Washington provides an introduction to machine learning and its applications in business.

Eubanks, Virginia, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” St. Martin’s Press, 2018. An associate professor of political science at the University at Albany, SUNY, investigates the effects of data tracking and automated decision-making on poor and working-class people.

O’Neil, Cathy, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Crown, 2016. A mathematician and data scientist who earned her doctorate in math from Harvard University examines how unregulated and uncontestable black-box AI systems can reinforce discrimination.

Articles

Angwin, Julia, et al., “Machine Bias,” ProPublica, May 23, 2016. Reporters uncover racial bias in algorithmic scoring systems used to inform decisions in courtrooms nationwide.

Bass, Dina, and Ellen Huet, “Researchers Combat Gender and Racial Bias in Artificial Intelligence,” Bloomberg, Dec. 4, 2017. An article profiles AI researchers who are working to combat machine-learning bias affecting women and minorities.

Simonite, Tom, “Artificial Intelligence Seeks an Ethical Conscience,” Wired, Dec. 7, 2017. A journalist summarizes discussions about ethics among leading AI researchers at a conference in Long Beach, Calif.

Reports and Studies

“Artificial Intelligence: The Next Digital Frontier?” McKinsey & Company, June 2017. A report by a global consulting firm examines AI investments and commercial applications.

“The National Artificial Intelligence Research and Development Strategic Plan,” Executive Office of the President, National Science and Technology Council, Committee on Technology, October 2016. A report by a White House interagency working group provides national strategy recommendations for AI research and development.

“Preparing for the Future of Artificial Intelligence,” Executive Office of the President, National Science and Technology Council, Committee on Technology, October 2016. Another White House interagency report surveys the current progress of AI and addresses concerns about its impact on society, business and public policy.

Campolo, Alex, et al., “AI Now 2017 Report,” AI Now Institute, January 2018. A nonprofit research institute at New York University provides an overview of current ethics issues in AI and offers recommendations for future research.

The Next Step

AI Bias

Delaney, John K., “France, China, and the EU All Have an AI Strategy. Shouldn’t the US?” Wired, May 20, 2018. The United States needs to catch up to other countries in AI regulation, including oversight aimed at guaranteeing fair and unbiased implementation, says a Maryland congressman.

Knight, Will, “Microsoft is creating an oracle for catching biased AI algorithms,” MIT Technology Review, May 25, 2018. Microsoft and Facebook are working on AI tools designed to detect instances of bias, in response to growing concern over the technology’s tendency to mirror societal prejudices.

Locascio, Robert, “Thousands of Sexist AI Bots Could Be Coming. Here’s How We Can Stop Them,” Fortune, May 10, 2018. AI developers should work harder to diversify their workforces to keep potentially biased, sexist programming from seeping into the AI-user relationship, says the CEO of a tech company.

Facial Recognition

Chutel, Lynsey, “China is exporting facial recognition software to Africa, expanding its vast database,” Quartz, May 25, 2018. As part of its partnership with a China-based facial recognition company, Zimbabwe has agreed to supply the company with personal data and to help improve the technology’s ability to recognize differences among people of different ethnicities.

Greig, Jonathan, “Welsh police facial recognition software has 92% fail rate, showing dangers of early AI,” TechRepublic, May 8, 2018. The high failure rate of the Welsh police force’s facial recognition program highlights growing concerns among watchdog groups that the technology lacks government oversight and regulation.

Wren, Ian, and Scott Simon, “Body Camera Maker Weighs Adding Facial Recognition Technology,” NPR, May 12, 2018. The leading supplier of body cameras for law enforcement is considering adding a facial recognition feature in an effort to stay competitive in the market.

Organizations

AI Now Institute
60 5th Ave., 7th Floor, New York, NY 10011
Research institute at New York University that examines the social implications of artificial intelligence.

Berkman Klein Center for Internet & Society
23 Everett St., #2, Cambridge, MA 02138
Research center at Harvard University that serves, along with the MIT Media Lab, as an anchor institution of the Ethics and Governance of Artificial Intelligence Fund, a $27 million fund created in 2017 to advance AI research for the public interest.

Center for Human-Compatible Artificial Intelligence
University of California, Berkeley, CA 94720-1234
Research center whose mission is creating beneficial AI systems by incorporating elements from the social sciences.

Institute of Electrical and Electronics Engineers
3 Park Ave., 17th Floor, New York, NY 10016-5997
Professional organization that developed an ethics initiative for autonomous and intelligent systems.

MIT Media Lab
77 Massachusetts Ave., E14/E15, Cambridge, MA 02139-4307
Research laboratory at the Massachusetts Institute of Technology.

Partnership on AI to Benefit People and Society
215 2nd St., Suite 200, San Francisco, CA 94105
Nonprofit technology industry consortium established to formulate best practices on AI technologies.

White House Office of Science and Technology Policy
1650 Pennsylvania Ave., Washington, DC 20504
Federal office established by Congress in 1976 to advise the president and others within the Executive Office of the President on science and technology.

DOI: 10.1177/237455680418.n1