When Deep Blue beat world chess champion Garry Kasparov in their 1997 rematch, the news was greeted without too much alarm. After all, chess was just a game, like checkers or tic-tac-toe. If a computer could memorize enough mechanical moves to play, that didn't mean it was smart; it was just good at plugging in numbers. Computers can only do what they've been programmed to do, the conventional wisdom said; true originality, the capacity to look at a unique situation and determine an appropriate response, required more than circuits.
Of course, the creative, intelligent computer has done far better in capturing the public imagination than have its unexciting number-crunching counterparts. A world in which computers were as creative as humans would seem to leave the poor carbon-based creatures little room to excel, especially if their silicon rivals continued to increase in speed and processing capacity at an exponential rate. What would the humans do in a world where their machines outsmarted them?
So far, there's been little worry of wily computers inventing ways to outsmart (and replace) their owners. The annoying little paperclip in Microsoft Word is no great testament to the progress of artificial intelligence. But the claim that computers can't create has been challenged by two recent experiments, in which the output of computer programs--rigid algorithms with little room for intellectual freedom--was judged to be indistinguishable from (or even better than!) the attempts of unconstrained human imaginations. In other words, originality may be a little more unoriginal than we thought.
Brutus.1, a system developed at Rensselaer Polytechnic Institute, was designed to write short stories on the subject of betrayal--hence the treacherous name. To teach the innocent computer its sinful ways, the computer scientists who designed it set about "mathematizing the concept of betrayal through a series of algorithms and data structures." Most people might assume that betrayal is not easily mathematized, a subject for human emotion and not for symbol manipulators, but the computer seems to have picked up the vice rather well. (A Rensselaer press release states that the programmers also taught Brutus.1 something of deception, evil, "and to some extent voyeurism"--a project giving new meaning to the phrase "your data is corrupted.")
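What might "mathematizing" a vice even look like? Purely as an illustration--this sketch is hypothetical and does not describe Brutus.1's actual algorithms or data structures--one could imagine reducing betrayal to a condition a program can check, with the agents and the broken promise below invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str

@dataclass
class Promise:
    promiser: Agent   # who gave their word
    promisee: Agent   # who relied on it
    goal: str         # what was promised
    fulfilled: bool   # did the promiser follow through?
    deliberate: bool  # was the failure a deliberate choice?

def is_betrayal(p: Promise) -> bool:
    """A crude 'mathematization': a betrayal is a promise the
    promiser deliberately chose not to keep."""
    return (not p.fulfilled) and p.deliberate

# An invented scenario loosely in the spirit of "Self-Betrayal":
# someone at Dave Striver's beloved university gives his word
# and then deliberately breaks it.
dave = Agent("Dave Striver")
colleague = Agent("a university official")
broken_word = Promise(promiser=colleague, promisee=dave,
                      goal="support Dave's ambitions",
                      fulfilled=False, deliberate=True)
print(is_betrayal(broken_word))  # True
```

The point of any such reduction is only that once a vice has been stated as a checkable condition, a story generator can, at least in principle, be steered toward plots that satisfy it.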
The program's first results have been mixed. This fall, in an online contest at www.instantnovelist.com, Brutus.1 competed against four humans who wrote short stories on the same topic. The computer's entry, "Self-Betrayal," was unremarkable; its first sentence, "Dave Striver loved the university--at least most of the time," seems to fail the "Call me Ishmael" test, and the protagonist's name is clumsily allegorical. In fact, the story came in last in a poll of visitors to the site; the literary field is for now safe from a deluge of machine-produced prose.
More interestingly, however, only 25 percent of the visitors correctly identified the entry as computer-written--a figure barely above the 20 percent that random guessing among the five entries would predict. The computer's work was, to many, indistinguishable from that of the human authors. The standard test of whether a computer is conscious is whether it "acts conscious"--whether an observer would be unable to tell that its output came from a computer rather than from another human. Brutus.1 has by no means become a thinking writer, but if its product looks human to readers, it has somehow made up for whatever ingenuity it may lack.
A more positive result was generated by researchers at the Hebrew University of Jerusalem, who decided to challenge an industry defined by creativity--advertising. The researchers noticed that a large minority of prize-winning ads followed a simple formula: find the product characteristic that is its selling point, and use images to emphasize that characteristic in the ad. In a similar contest, the ads the researchers' computers produced were judged generally as good as those of professional ad agencies, and far better than the efforts of human amateurs. For instance, while a human suggested a picture of the walls of the Old City to promote a tennis tournament in Jerusalem, the computer proposed a domed mosque with the dome in the shape of a tennis ball.
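The formula itself is almost embarrassingly simple. As a purely hypothetical sketch--the function and data below are invented for illustration and are not the researchers' actual system--it amounts to little more than filling in a template:

```python
# A toy rendering of the formula the article describes: take the
# product's selling point and fuse it into images already associated
# with the setting. (Illustrative only; not the researchers' code.)

def suggest_ads(selling_point: str, familiar_images: list[str]) -> list[str]:
    """Produce one image idea per familiar image by replacing its most
    recognizable feature with the product's selling point."""
    return [f"Show {image}, with its most recognizable feature "
            f"replaced by a {selling_point}."
            for image in familiar_images]

# The article's example: promoting a tennis tournament in Jerusalem.
for idea in suggest_ads("tennis ball",
                        ["a domed mosque", "the walls of the Old City"]):
    print(idea)
```

Run on the Jerusalem example, a template like this yields the tennis-ball dome almost for free; whatever creativity there is lives in the template, not in the machine.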
Of course, the program also had its limitations. According to Reuters, an assignment to create ads for a classy brand of cat food resulted in images of the cat food in a formal ballroom and of cat food teamed with Count Dracula.

Does either of these examples describe true creativity? If creativity means creation in a vacuum, then no. However, very little human creativity may come out of a vacuum. The most powerful literature does not always deal with subjects never considered before, but often presents common experiences in a slightly different light. If a formula can make you look creative, maybe originality is more formulaic than the Romantic ideal of the inspired genius would imply--maybe the interactions of neurons and of transistors aren't that far apart. Literary creativity may be just the first in a number of skills, now thought to be inaccessible, that future computers will acquire. In the end, answering this question could turn out to be the first achievement of computers in another field: philosophy.
Stephen E. Sachs '02, a Crimson editor, is a history concentrator in Quincy House.