Can machines learn to explain their decisions?
Artificial intelligence, or AI – machine functions that exhibit abilities of the human mind, such as learning and problem solving – is in a phase of rapid development. U.S. and international technology companies are competing intensely to develop AI systems that can automate tasks across a diverse range of industries, from entertainment to marketing to human resources. However, AI systems are so complex that even their developers are sometimes uncertain about how they reach decisions. This opacity has raised unique ethical and regulatory challenges, as researchers point to evidence that AI systems can be biased for or against certain groups of people in decisions about jobs, services or loans. Many researchers are calling for greater transparency and for a legal “right to explanation” for those affected by AI decisions.
Here are some key takeaways:
Investment in AI is being fueled by tech giants such as Google and Baidu; companies spent more than $20 billion on AI-related mergers and acquisitions globally last year.
There has also been an upswing in research on AI techniques such as deep learning, with most of the work published in China and the United States.
AI decision-making in high-stakes fields such as hiring and criminal justice has heightened ethical concerns about bias and accountability.
Resources for Further Study
Books
Domingos, Pedro, “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World,” Basic Books, 2015. A computer science professor at the University of Washington provides an introduction to machine learning and its applications in business.
Eubanks, Virginia, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” St. Martin’s Press, 2018. An associate professor of political science at the University at Albany, SUNY, investigates the effects of data tracking and automated decision-making on poor and working-class people.
O’Neil, Cathy, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Crown, 2016. A mathematician and data scientist who earned her doctorate in math from Harvard University examines how unregulated and uncontestable black-box AI systems can reinforce discrimination.
Articles
Angwin, Julia, et al., “Machine Bias,” ProPublica, May 23, 2016, http://tinyurl.com/
Bass, Dina, and Ellen Huet, “Researchers Combat Gender and Racial Bias in Artificial Intelligence,” Bloomberg, Dec. 4, 2017, http://tinyurl.com/
Simonite, Tom, “Artificial Intelligence Seeks an Ethical Conscience,” Wired, Dec. 7, 2017, http://tinyurl.com/
Reports and Studies
“Artificial Intelligence: The Next Digital Frontier?” McKinsey & Company, June 2017, http://tinyurl.com/
“The National Artificial Intelligence Research and Development Strategic Plan,” Executive Office of the President, National Science and Technology Council, Committee on Technology, October 2016, http://tinyurl.com/
“Preparing for the Future of Artificial Intelligence,” Executive Office of the President, National Science and Technology Council, Committee on Technology, October 2016, http://tinyurl.com/
Campolo, Alex, et al., “AI Now 2017 Report,” AI Now Institute, January 2018, http://tinyurl.com/
The Next Step
Delaney, John K., “France, China, and the EU All Have an AI Strategy. Shouldn’t the US?” Wired, May 20, 2018, https://tinyurl.com/
Knight, Will, “Microsoft is creating an oracle for catching biased AI algorithms,” MIT Technology Review, May 25, 2018, https://tinyurl.com/
Locascio, Robert, “Thousands of Sexist AI Bots Could Be Coming. Here’s How We Can Stop Them,” Fortune, May 10, 2018, https://tinyurl.com/
Chutel, Lynsey, “China is exporting facial recognition software to Africa, expanding its vast database,” Quartz, May 25, 2018, https://tinyurl.com/
Greig, Jonathan, “Welsh police facial recognition software has 92% fail rate, showing dangers of early AI,” TechRepublic, May 8, 2018, https://tinyurl.com/
Wren, Ian, and Scott Simon, “Body Camera Maker Weighs Adding Facial Recognition Technology,” NPR, May 12, 2018, https://tinyurl.com/
Contacts
AI Now Institute
60 5th Ave., 7th Floor, New York, NY 10011
Research institute at New York University that examines the social implications of artificial intelligence.
Berkman Klein Center for Internet & Society
23 Everett St., #2, Cambridge, MA 02138
Research center at Harvard University that serves, along with the MIT Media Lab, as an anchor institution of the Ethics and Governance of Artificial Intelligence Fund, a $27 million fund created in 2017 to advance AI research for the public interest.
Center for Human-Compatible Artificial Intelligence
University of California, Berkeley, CA 94720-1234
Research center whose mission is creating beneficial AI systems by incorporating elements from the social sciences.
Institute of Electrical and Electronics Engineers
3 Park Ave., 17th Floor, New York, NY 10016-5997
Professional organization that developed an ethics initiative for autonomous and intelligent systems.
MIT Media Lab
77 Massachusetts Ave., E14/E15, Cambridge, MA 02139-4307
Research laboratory at the Massachusetts Institute of Technology.
Partnership on AI to Benefit People and Society
215 2nd St., Suite 200, San Francisco, CA 94105
Nonprofit technology industry consortium established to formulate best practices on AI technologies.
White House Office of Science and Technology Policy
1650 Pennsylvania Ave., Washington, DC 20504
Federal office established by Congress in 1976 to advise the president and others within the Executive Office of the President on science and technology.