Reid Simmons, a research professor at Carnegie Mellon University, just finished another season of commencements. Over a career spanning more than 30 years, Simmons has seen plenty of students reach that milestone, but the newest crop of graduates is special because they’re like penguins, he said.
The late Randy Pausch used the analogy in describing flightless birds that “waddle off to the edge of the ice and they see the water, and they don't know what's in there,” Simmons recounted. “There could be a lot of fish or there could be predators, and it takes one adventuresome penguin to jump in first.”
The students who just graduated — particularly those who majored in artificial intelligence — are like the penguins because when they chose their course of study, AI was a brand-new major, said Simmons, director of CMU’s AI major. Four years ago there was some “uncertainty” about whether these 39 students would get particular jobs or coveted slots in graduate schools, but thankfully it worked out, Simmons continued. “They're the pioneers, and there's a lot to congratulate them for.”
The research professor and Jewish blogger spoke with the Chronicle shortly after commencement but didn’t spend much time looking back. Instead, he described the future — both as it pertains to AI and the students who study the field.
Thanks to movies like “The Terminator,” AI has long captivated human interest. Weeks ago, the subject got new attention when a Google engineer told The Washington Post that he believed the company’s AI had become sentient.
Media reports and Hollywood blockbusters spur interest in the field but do a poor job of explaining where the technology is actually headed, Simmons said. These stories make great television or movies, but they’re “really very far from reality.”
Rather than robots taking over the world, the bigger concern is “people using AI technologies to do bad things to other people,” Simmons said. Any technology can be used for good or bad; what’s critical, though, is that people “understand the difference and are not complicit in developing a technology that can be used for bad purposes.”
For Simmons, a Squirrel Hill resident who maintains a kosher cooking blog, Judaism is a helpful reminder in how to address certain matters related to AI.
“One of the main issues in terms of AI is that if you feed it biased data it produces biased results, and it could be discriminatory results,” he said. “Judaism teaches a respect and love for all people, and I think that this is a very important thing that we need to be aware of — that the technologies that we're developing are not just going to be used for educated people who are developing the technologies, but they need to be used, and not discriminatorily, for all people, respecting their autonomy, respecting their privacy.”
There are instances in which bad actors take advantage of technological advances, but there are also times when people don’t give enough attention to the products and materials being developed. For example, Simmons said, early on in the development of face-detection technology there were difficulties recognizing African American faces.
One of the main reasons, according to Simmons, was that the training was performed on “mostly white faces.”
This goes back to the idea that feeding AI biased data produces biased results, he explained.
When early face-detection software failed to recognize African American faces it wasn’t “an evil plot to discriminate against Blacks,” Simmons said. “It was a lack of understanding about the diversity of training data that was needed in order to get the nondiscriminatory result.”
Simmons hopes future collaborations between ethicists and engineers will yield better outcomes, and he pointed to a project supported by the National Science Foundation studying how AI can help older adults and their caregivers.
The project, which is headed by Georgia Tech, is a five-year endeavor looking at “how we can help people particularly with mild cognitive impairment live independently — in their own homes — by providing guidance by detecting changes in their behavior,” he said.
The hope is that CMU and other participating universities can develop fundamental technologies and “commercializable products” that help determine when a physician or caregiver may be needed, Simmons said.
As the project unfolds, engineers and ethicists are working together to understand some of the underlying issues, because the goal, Simmons said, is to design the technologies so that people will be able to “use them and use them in the correct ways.”
Whether it’s helping people with cognitive impairment remain in their homes or autonomously driving patients to their doctor, AI has the potential to “be a tremendous benefit to people,” Simmons said. “This is something that I think that we should embrace because it's going to radically change our lives for the better.”
And yes, there’s a lot of fear out there about what AI is capable of, but this isn’t “something people should be concerned about. The important thing is to make sure that the engineers who are developing and deploying this technology understand the ethical issues that underlie the technology,” Simmons said. “If they do, I think that there's a tremendous amount of good that this technology can bring people.” PJC
Adam Reinherz can be reached at firstname.lastname@example.org.