Discussing The Intangible
Parker Hosts A Discussion About Ethics in A.I.
With the increasing prevalence of artificial intelligence in our day-to-day lives, attention has turned to its ethics.
On Thursday, November 9, Parker hosted a faculty discussion about ethics in artificial intelligence with visiting scientist Rick Stevens, a professor at the University of Chicago, who studies computing, environment, and life sciences, along with Upper School science teacher Xiao Zhang, Upper School history teacher Susan Elliott, Upper School Librarian Annette Lesak, and STEM and coding teacher Adam Colestock.
“I think the purpose is to help prepare teachers for the visiting scientist across content areas,” Lesak said, “and perhaps it’s a little bit of outreach from the science department to their colleagues to show that this particular issue of artificial intelligence reaches across content areas.”
On September 29, Stevens gave an MX for the school, which focused on artificial intelligence.
The faculty discussion was a success, according to Elliott. “It was really good,” Elliott said. “I couldn’t believe an hour went by like that.” One of the topics that came up was the ethics of self-driving cars. Another topic: how computers are making decisions that impact our lives, without the knowing consent of users.
According to MIT Technology Review, John Giannandrea, Senior Vice President for Search at Google, has argued that the real safety question is biased data: if we give systems biased data, they will be biased, a concern as the technology spreads into critical areas like medicine and law.
Algorithmic bias is common across many industries, and little has been done to correct it. Society trusts these machines to do as good a job as humans, yet they are simply machines, carrying out their tasks without emotion or judgment.
“If it is considered as an intelligent being, then we have a responsibility to treat them ethically,” Zhang said. “This is a discussed issue — in order for artificial intelligence to be ethically aware, we should treat them with the same ethics we want them to treat us. It’s essentially the golden rule, only for robots.”
Elliott considered how one defines artificial intelligence. “If you’re taking, say, admissions information,” Elliott said, “and you’re computerizing it, and then you’re using the computer to organize that information, could you say that database is a form of artificial intelligence because is there any point where it is learning or prioritizing on its own?”
Identifying instances of artificial intelligence can be difficult, according to Lesak. "There's probably ways artificial intelligence is present in our school that I'm not even aware of," she said.