Experts Share Views on Artificial Intelligence and Its Future with Full House of Faculty

Discussion reveals the state of AI, its challenges and its prospects for improving lives

Pictured, from right: Hugh Seaton, Emily Reid, Lin Zhou, Kathy McGroddy-Goetz, Jerry Smith and Rupendra Paliwal.

As far as we know, there were no humanoids amid the full house of educators who gathered October 9 at Sacred Heart University for a discussion on “Artificial Intelligence—Applications and Implications.”

Rupendra Paliwal, SHU’s provost and vice president for Academic Affairs, and Cenk Erdil, assistant professor of computer science and engineering, moderated the session. It featured five leading experts in the AI field: Jerry Smith, vice president of data sciences and AI strategy at Cognizant; Kathy McGroddy-Goetz, head of strategic partnerships at Medidata Solutions; Lin Zhou, master inventor and program director of Education Innovation at IBM Watson Education; Emily Reid, principal of E.E. Reid Consulting, LLC; and Hugh Seaton, CEO of Aquinas Learning, Inc.

University President John Petillo offered his welcome, in particular calling out a 20-member faculty team from SHU’s Luxembourg satellite campus. Petillo said staying ahead of the competition, keeping pace with students, working AI into the curriculum and exploring associated opportunities are important for Sacred Heart University.

Paliwal followed, noting that SHU’s administrators have been talking for the past two years about continuing to build on SHU’s excellence in teaching and learning. Given how AI has come to affect all aspects of our lives and has the potential to alter the way people work and interact, “We want to be one of the pioneers and think about how our content, curriculum and teaching reflect it,” he said. He referenced SHU’s new West Campus (the former GE headquarters) and its new AI and cyber security labs, and said that to make optimal use of these resources the school needs to build awareness and expertise in these areas.

After introducing each of the panelists, Paliwal asked for their definitions of AI.

“It’s not much different than human intelligence, just computer-based. If a system is capable of doing it, it’s AI,” said Smith. Goetz described it as data with different kinds of mathematical action, intended to augment the intelligence of the people using it. Zhou approached the question by noting three pillars of today’s economy: GPS, network and CPU. He said the tech industry is building fantastic platforms in these areas and generating tremendous amounts of data. As humans fall behind in curating data, AI has come in to meet the demand by transforming data into knowledge. Reid pointed to AI’s present shortcomings—that scientists can spend years creating commands to make a machine perform a function—and she said AI will be achieved when a human being can’t distinguish between AI and humans (or humanoids in an audience). Seaton added that AI’s capabilities are still incredibly narrow, but they’re moving quickly and will be very different in just a year. “AI is not really that intelligent yet; the human mind is more complex,” he said.

Sitting at the opposite end of the lineup of panelists, co-moderator Erdil returned serve—like a throwback game of PONG—and asked if there were any misconceptions about AI. Seaton reiterated that AI tech is still narrow and cited common problems with using Siri. He also suggested the idea that jobs will disappear is premature. “Real adoption of real technology is slow and painful, e.g., driverless cars. Making products that real people can’t break is hard,” he said.

Reid expressed concerns about the data we’re feeding into machines: algorithmic biases are present, and data is tainted because it reflects our own problems and issues, which machines will then perpetuate. “Education is important as an opportunity to make AI better at the information source,” she noted.

Zhou generalized that, contrary to popular belief, the public is already familiar with AI and, at some point, we have all used it. Goetz said AI has challenges in the health sector: for example, explaining a diagnosis or treatment plan prescribed by AI is difficult. “It’s in its early days; it will take time to adopt,” she said. Smith took a broader look, cautioning that AI lacks transparency and trust and is unable to understand ethics. “We must get these down or there will be a third nuclear winter,” he offered.

Paliwal queried the group about AI’s relation to social justice, ethics and income inequality, topics near and dear to the SHU community. Smith spoke to bias, noting three kinds: AI created through data, AI created by humans and AI creating AI. “Everyone puts biases in data,” he said, and systems must grow organically. Goetz concurred, noting that explainability and avoiding biases should be the focus. Zhou also echoed the need to avoid bias and said the best approach for students is to be better informed about AI. Reid cited privacy, bias and how AI solutions are applied as her priorities. Seaton asked what constitutes bias and pointed to the need to build a framework for understanding first how our own minds work. “We need people to ask fundamental questions in a new and critical way. Our brains do a lot of things automatically—we wonder why machines can’t do that,” he said.

An audience member asked if AI can uncover bias. Reid said IBM is doing a good job balancing data sets and working on technical ways to resolve bias. However, she said, “It will always be a catch-up game, as bias will always be part of our society.” Smith said his organization tracks more than 150 biases, and we must apply digital psychology to the process. Reid said even the most well-intentioned groups are going to create things that don’t serve everyone. Seaton remarked that AI is sneaky: it may create more problems while trying to solve an initial one.

Another audience member asked how AI benefits us right now, suggesting humans need to focus on nourishing our souls. Reid said AI does things for humans that are monotonous and mechanized, freeing us to do things that are more deeply human. Seaton said AI is useful in construction and design, giving us more choices to consider. Smith said AI creates poetry that is used to understand how people’s minds work. “If we can teach AI to dream and express themselves, psychologists can interpret,” he said.

Nearing the end of the session, an audience member asked if everyday AI use changes us. Zhou said he has been teaching for the last 10 years and has tried to use AI as much as necessary. By assigning routine and “mechanical” tasks to AI, he frees up time to interact with students, which is more valuable and something humans do much better than AI.

Finally, Seaton noted that humans add emotion to decision-making processes, with our brains taking in data and reacting to it. Now, we are asking deep-learning systems to choose between what they like or don’t like. “This is a very important thing. Will AI get to a point where it can make decisions like us?” he asked.