HW1: AI Considered Dangerous?

out 2/1, due 2/8

Advances in Artificial Intelligence can improve our lives in many ways, but it is possible that they could make things worse for some or even all people. Concerns about the potential negative effects of technology are not new; the Luddites in the early 19th century are a famous example. Many plays, books and films of the past 100 years are built around the theme that intelligent robots are a great danger. Examples include the roboti in Karel Capek's play R.U.R., Maria in Fritz Lang's film Metropolis, the Nexus-6 model androids in P.K. Dick's novel Do Androids Dream of Electric Sheep? and the time-traveling Terminators.

Recent advances in computing and AI have led some technologists to worry that they will lead to the creation of superintelligences whose cognitive abilities will surpass humans' in almost all areas, and that this could pose an existential risk to humanity. The idea is sometimes tied to an anticipated technological singularity, which Wikipedia describes as

"The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion—that yields an intelligence surpassing all current human control or understanding. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence."

While many high-profile figures, including Elon Musk, Bill Gates and Stephen Hawking, have expressed concern, most A.I. researchers are not very worried.

Background reading

  • Start by reading a recent story in the Washington Post, THE A.I. ANXIETY, which provides a good overview of the current controversy.
  • Watch the short TED talk (What happens when our computers get smarter than we are?) by philosopher Nick Bostrom, whose recent book and talks on superintelligence have gotten a lot of press.

What to do

Pretend that you are writing an opinion piece for The Retriever on the topic, explaining why we should or should not be worried about the dangers of a superintelligence emerging in your lifetime. Your piece should be at least 500 words long and written so it could be understood by the Retriever's audience. It should explain the issue and why many people are concerned about it today, mention arguments on both sides of the controversy, and present and argue for your own opinion. Oh, and think up a catchy headline.

The assignment is due before 11:59:59 on Monday, 8 February.

Summarize your piece in a tweet-length (i.e., at most 140 characters) comment to the Piazza thread HW1tweets.

Clone your 471 HW1 git repository, add a pdf file with your piece to it, commit the file, and push the changes back to the GitHub repository.
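If you have not used git for this before, the workflow would look roughly like the sketch below (the repository URL, directory name and pdf file name are only illustrative; use the ones for your own HW1 repository and account — and note the change of plans described in the next paragraph):

    # clone your HW1 repository (URL here is hypothetical -- use your own)
    git clone https://github.com/UMBC-CMSC471/hw1-yourusername.git
    cd hw1-yourusername

    # copy in your pdf, commit it, and push it back to GitHub
    cp ~/finin_hw1.pdf .
    git add finin_hw1.pdf
    git commit -m "Add HW1 opinion piece"
    git push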

We've run into some problems with our plan to use GitHub. For HW1, and maybe even HW2, we will use trusty old submit on gl. The class is cs471_01 and the directory is set up. Please submit a single pdf file for HW1 with a name like finin_hw1.pdf, where 'finin' is replaced by your UMBC gl userid. Your document should start with a title and include your full name and email address. Here is an example.

Most of you have used the submit system on gl before. If you have not, you can view some help here. The steps are to (1) upload your pdf file to the gl sustem using an ftp command or client; (2) log into gl and then add the file to your cs471_01 submit directory and (3) check to ensure it's there! Here's a script capture that shows how I uploaded s file and submitted it to cs471_01.