Do Ethics Apply to AI?

Elisabeth Tavarez

Assistant Professor of Computer Science Pablo Rivas says yes, and he is on the cutting edge of establishing ethical guidelines in the application of artificial intelligence and machine learning.


September 30, 2019—When most people imagine a trip to Italy, they think of visits to museums and monuments, delicious local cuisine, and world-class shopping. And while Assistant Professor of Computer Science Pablo Rivas did his fair share of sightseeing, the main purpose of his trip over the summer was to investigate the ethical responsibilities that artificial intelligence (AI) carries throughout the world and, specifically, why Italy is such an advocate for the responsible and ethical practice of AI.

AI is defined by Merriam-Webster as “a branch of computer science dealing with the simulation of intelligent behavior in computers.” Machine learning, a subset of AI, teaches computers how to learn through algorithms. It’s a data-driven field long assumed to be completely objective, but in recent years the number of papers on fairness in AI has grown exponentially as people begin to ask questions. It’s a hot topic. As Rivas explains, “We started hearing about algorithms that were incorrectly predicting, for example, who is at high risk of committing a crime. In one study, African Americans were being misidentified at twice the rate of their white counterparts, so it raises fundamental questions of fairness and bias.” Rivas notes that bias in machine learning can have very real implications: it can directly harm minority groups, leading to certain minorities being arrested at higher rates, or produce face and voice recognition systems that perform poorly for those groups. That is why it’s important to examine an algorithm’s underlying data, and the assumptions made about that data, before teaching the computer to learn from it.

Similar to the United States, Italy lies at the border of the “Global South,” or developing world, and excellent work is being done there on ethics in AI that can serve as a model for other regions. Italian scientists form one of the largest cohorts in the European Union (EU) working on guidelines for trustworthy AI; they collaborate with an EU-sponsored group that has proposed ethical guidelines for AI research, especially in data collection and management. Rivas interviewed many of these experts and other scientists about their ethics culture, including Jose Lorenzana and Claudio Castellano, renowned scientists at the Consiglio Nazionale delle Ricerche; Michelangelo Puliga of LinkaLab; and Salvatore Ruggieri of the University of Pisa. Says Rivas, “We discussed how they view the impact of AI technology in their work and in their lives outside the laboratory or workplace, as well as what concerns them about AI. It was eye-opening.” Rivas also had the opportunity to present his research at the annual meeting of the Association for Computational Linguistics, which took place in Florence.

In the US, efforts to develop ethical guidelines for AI are overseen by the Institute of Electrical and Electronics Engineers (IEEE), which has formed international scientific groups to write normative ethical standards. Rivas was invited to serve on two IEEE committees: the Algorithmic Bias Working Group and the Empathic Technology Working Group. The standards established by the IEEE have global impact, touching everything from Wi-Fi communication to self-driving cars to HDTVs.

While the US and Europe already have ethical standards for AI, the developing world does not have the resources to convene a team of experts to examine the subject. Why does this matter? Rivas notes that “the algorithms we create have the potential to promote positive change in society, but they can also perpetuate bias. AI learns from data, which mirrors society. For example, African Americans are incarcerated at a higher rate than whites, which reflects societal bias. AI has the potential to perpetuate that bias. In my view, we must build AI that reflects the society we want to become, and not necessarily the one we have now.” Rivas adds that it’s also important to protect people who don’t have access to technology, but are nonetheless affected by it. For instance, facial and voice recognition technology might not recognize everyone equally well, so it’s crucial to be aware of the potential for unfair outcomes.

Roger Norton, Dean of the School of Computer Science and Mathematics, agrees that these are important questions to be asking and that Rivas is the right person to ask them. Says Norton, “As both an ordained minister and a computer scientist, Pablo is able to bring a unique perspective to this important and difficult topic. Consider a self-driving vehicle faced with a child running into the road after a ball, whose only alternative to hitting the child is to turn right, into an elderly man walking down the sidewalk. What is the ethical response of the algorithm?”

At Marist, Rivas is incorporating this focus on ethics into his teaching and research. Along with Sabrina Bergsten ’20, an information technology and systems major, he recently authored a paper entitled “Societal Benefits and Risks of Artificial Intelligence: A Succinct Survey” for the International Conference on Artificial Intelligence. Rivas also teaches a course at Marist on “Technology, Ethics, and Society,” which draws students from a variety of academic majors. “The mix of students makes for really interesting conversations,” says Rivas, “where the students from other majors get the computer science students to think about moral theory and applied ethics.” Rivas has also been writing a book on ethics in technology, and the feedback he gets from his students has informed his thinking. He recently signed a contract for another book on deep learning, the first to teach AI with an ethical component. With all of these activities, Rivas is doing his part to ensure that ethics in AI will continue to be discussed, both by practitioners and by students in the classroom.
