Read the full text of Ryan’s paper here: How Anthropomorphism Distorts the Concept of AI

From Alexa in a speaker to Roomba under the couch, almost every Silicon Valley student is familiar with some form of affectionately nicknamed artificial intelligence (AI) in their homes.

One local computer science aficionado aims to make an impact by promoting an alternative perspective on AI. Paly senior Ryan Liu, who took a course on the subject under Stanford Prof. Jerry Kaplan last winter, wrote a paper on the anthropomorphization of AI, or the ascribing of human qualities to nonhuman objects. Liu gives the example of autocorrect adding errors to our texts; many react with frustration, expecting their smartphones to be “smarter.” Yet our devices rely on preprogrammed data, rather than human intuition honed by millions of years of evolution.

“We unconsciously place unreasonable expectations on our technology because of anthropomorphization,” Liu writes in his paper, “How Anthropomorphism Distorts the Concept of Artificial Intelligence.”

In fact, technology lacks (and will likely never have) the capacity to act fully autonomously, according to Liu.

“Machine learning [a form of AI] is goal-defined,” Liu says. “Since it’s goal-defined, machines can’t ‘turn evil’ unless the goals themselves are evil — and the goals are assigned by humans.”

Though the tendency to anthropomorphize may seem innocuous, Liu warns of the impediments it poses to technological progress.

“Anthropomorphization is painting a false image of AI as an enemy of humans, which creates a barrier of knowledge between those who understand AI and the general population,” Liu writes. “You really don’t want the public image of AI to be negative and weaponlike. … It is very important for AI not to become a weapon used against other countries.”

Given the misrepresentation of AI in popular culture, and the lack of alternative sources of information, Liu advocates for a two-pronged solution to the problem he identifies in his paper.

“Education is the best solution, which is why I wrote this piece,” Liu says. “It’s very important to get the message across to most people because every person’s mind really counts toward this future. I would [also] encourage others [who are AI literate] to make everyone AI literate.”

While individuals can change themselves and others around them, Liu also calls for a longer-term cultural shift, which countries like Japan have already accomplished.

“You could also change mass media,” Liu says. “Japan does this really well. Instead of terminators, they have robots that are helpers or saviors of society. Think characters like Astro Boy, which saves the world, or Doraemon, the robotic flying cat with the propeller on its head.”

Not only is this mindset shift critical for students hoping to develop their understanding of AI and technology, but Liu also believes students will benefit from critiquing popular representations of AI.

“It is very important to be AI-literate if you intend to focus on technology and STEM,” Liu says. “Otherwise, students should look to recognize the difference between what sells in mass media and what is plausible in real life.”