March 18, 2019

WANG | On Yesterday, Song-Writing and Artificial Intelligence


By now, you’ve probably heard about the upcoming film Yesterday. It follows a struggling musician who gets the break of a lifetime when he’s rudely waylaid by a truck, and he awakens to a world suddenly forgetful of The Beatles. Through sheer bashfulness and chutzpah, he starts to “write” hit song after hit song from the Beatles catalog for a girl he’s after.

We can guess where the movie goes: He gets the girl, writes the hit songs, rides off into the sunset. The whole movie is a sundae in cinematic form: Sweet and reliable with a pleasant aftertaste.

The movie’s song “Yesterday” is also inspired by the actual song, which Paul McCartney wrote and performed solo, and which was born when he heard the melody in a dream one day, rolled out of bed and played it on his piano. So sure was he that the melody had come from somewhere else that he searched endlessly for a prior source. After finding nothing, he resigned himself to the fact that he had simply conjured one of the most iconic melodies in modern music from nothing more than a dream.

Another Englishman would have crowed about such an achievement by his countryman. In 1949, Geoffrey Jefferson, a renowned neurosurgeon and a scathing skeptic of machine intelligence, wrote feverishly, “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain — that is, not only write it but know that it had written it.”

What he argued was that machines would never think organically the way humans do. A machine could never conjure a song from something as lively as a dream or an emotion. It could never rise to the intuition of a human, to the vast (or in some cases, narrow) emotional range of a person. It might appear intelligent from the outside, but peel the hardware away and something much less impressive sits underneath: something far inferior to human emotion.

Jefferson has yet to be proven wrong, but the debate over whether artificial intelligence can think has grown more and more contentious. Chatbots, which run on word-recognition algorithms and database queries, deliver set messages to customers. Autofill search bars steer us toward the most popular searches. Smart replies in Gmail nudge us toward the responses we don’t know how to word. But is any of this really what we consider thinking?
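To see how thin that sort of “intelligence” can be, consider a toy version of the keyword-matching chatbot described above. The sketch below is a deliberately simplified illustration of my own, not how any real product is built: it recognizes a few words and pulls a set reply from a lookup table, with nothing resembling understanding behind it.

```python
# A minimal sketch of a scripted, keyword-matching "chatbot": it scans a message
# for known words and returns a canned reply from a lookup table. The keywords
# and replies here are hypothetical illustrations only.

CANNED_REPLIES = {
    "refund": "I'm sorry to hear that. I've opened a refund request for you.",
    "hours": "Our store is open 9 a.m. to 5 p.m., Monday through Friday.",
    "shipping": "Standard shipping takes three to five business days.",
}

FALLBACK = "I'm not sure I understand. Could you rephrase that?"


def reply(message: str) -> str:
    """Return a set response if any known keyword appears in the message."""
    lowered = message.lower()
    for keyword, response in CANNED_REPLIES.items():
        if keyword in lowered:
            return response
    return FALLBACK


if __name__ == "__main__":
    print(reply("What are your hours this week?"))  # hits the canned "hours" reply
    print(reply("Why do I dream in melodies?"))     # falls through to the fallback
```

Everything this program “knows” was typed in ahead of time by a person; ask it anything outside its table and it shrugs.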

The Loebner Prize is a good example of what has poisoned AI. At the annual competition, chatbots converse with human judges under the Turing Test’s premise that if a machine can fool a human into believing it, too, is human, then it must have human intelligence. The goal for the programs is simple, then: Convince the judges they’re human, and they win.

Given the subtleties of human conversation and the vast knowledge required to hold one, it’s no small feat for a chatbot to fool even a single human judge. And just last year, a chatbot fooled one-third of the Loebner judges it “talked to.” But this doesn’t prove that a machine can think. It only proves that, in a showcase environment, a machine can mimic humans, reliably pulling canned responses and scraped information rather than producing them through anything resembling thought. It’s not intelligence, and it’s certainly not the madcap inspiration we want.

The long-run problem with this form of AI is that the common sense humans take for granted must be explicitly taught to it. Knowledge we barely dwell on conjuring, such as “it’s cold outside, wear a jacket” or “there’s a car coming; you can avoid it by stepping into another lane or stopping,” must be force-fed to the AI. For the computer to navigate everyday life, it has to quickly pick up a million facts that are common knowledge to us. And more often than not, it fails to do so.

True AI requires further development in a subfield dubbed “Strong AI.” Instead of handing the computer all of its information at once, Strong AI would give the machine sensory perceptions akin to our own, connecting its mind and body so it can pick up facts more gradually, learn like a child and, essentially, develop as a human does.

The first challenge is that developing strong artificial intelligence is grossly difficult; the second is that no one is quite sure how to define intelligence in the first place. If the first problem has left us lost, the second has left us blind.

But there’s another thing. A friend of mine who is studying religiously hard to become a doctor mentioned that even if AI became intelligent enough to supplant health professionals, there is an emotional dimension a machine will never achieve. Connecting with patients requires a human’s understanding, not just a doctor’s knowledge. Delivering a diagnosis is an art in itself: being aware of and connected with people enough to know what to say, when to say it and how to say it.

We’re stuck in the dark ages of true artificial intelligence. If we can’t really define what intelligence is, how do we know when we’ve achieved it? Do we define intelligence by SAT scores, IQ scores or some other arbitrary number? Or do we define it by the creativity of the likes of McCartney, who can pull a song out of seemingly nothing? To create a machine that can fool us might take only a few dreamless nights. But to create a machine that can fool itself, one that can convince itself it’s wrong, learn from its mistakes and weigh the risks and rewards of reaching out to another, may itself be just a dream.

William Wang is a junior in the College of Agriculture and Life Sciences. He can be reached at [email protected]. Willpower runs every other Tuesday this semester.