HW1: AI Considered Dangerous?

out 2/1, due 2/8

Advances in Artificial Intelligence can improve our lives in many ways, but it is possible that they could make things worse for some or even all people. Concerns about potential negative effects of technology are not new; the Luddites in the early 19th century are a famous example. Many plays, books and films of the past 100 years are built around the theme that intelligent robots are a great danger. Examples include the roboti in Karel Čapek's play R.U.R., Maria in Fritz Lang's film Metropolis, the Nexus-6 model androids in Philip K. Dick's novel Do Androids Dream of Electric Sheep, and the time-traveling Terminators.

Recent advances in computing and AI have led some technologists to worry that they will lead to the creation of superintelligences whose cognitive abilities will surpass humans' in almost all areas, and that this could pose an existential risk to humanity. The idea is sometimes tied to an anticipated technological singularity, which Wikipedia describes as follows:

"The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion—that yields an intelligence surpassing all current human control or understanding. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence."

While many high-profile figures, including Elon Musk, Bill Gates and Stephen Hawking, have expressed concern, most AI researchers are not very worried.

Background reading

  • Watch the short TED talk (What happens when our computers get smarter than we are?) by philosopher Nick Bostrom, whose recent book and talks on superintelligence have gotten a lot of press.

What to do

Pretend that you are writing an opinion piece for The Retriever on this topic, explaining why we should or should not be worried about the dangers of a superintelligence emerging in your lifetime. Your piece should be at least 500 words long and be written so it could be understood by The Retriever's audience. It should explain the issue and why many people are concerned about it today, mention arguments on both sides of the controversy, and present and argue for your own opinion. Oh, and think up a catchy headline.

The assignment is due before 11:59:59 on Monday, 8 February.

Summarize your piece in a tweet-length (i.e., at most 140 characters) comment to the Piazza thread HW1tweets.

Clone your 471 HW1 Git repository, add a PDF file with your piece to it, commit the file, and push the changes back to the GitHub repository. See here for more detailed instructions.
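The steps above can be sketched as the following shell session. The repository URL and file name here are placeholders; substitute the actual URL of your own 471 HW1 repository and the name of your PDF.

```shell
# Clone your HW1 repository (replace the URL with your own repo's URL)
git clone https://github.com/YOUR-ORG/471-hw1-YOURUSERNAME.git
cd 471-hw1-YOURUSERNAME

# Copy your PDF into the repository's working directory
cp ~/Documents/hw1.pdf .

# Stage, commit, and push the file back to GitHub
git add hw1.pdf
git commit -m "Add HW1 opinion piece"
git push origin main
```

Note that older repositories may use `master` rather than `main` as the default branch name; `git branch` will show which one yours uses.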