UMBC's Dean Julia Ross on Everyday AI

UMBC’s Julia Ross, Dean of the College of Engineering and Information Technology, and IBM Watson General Manager David Kenny discussed Everyday AI, new and developing artificial intelligence technologies that are leaving the lab and entering the consumer market, at the Washington Post’s Transformers live journalism event on 18 May 2016.

talk: Cognitive Computing & Visualization at IBM Research/RPI, 10am Thur 5/19, UMBC


Cognitive Computing and Visualization at IBM Research/RPI CISL

Dr. Hui Su, IBM Research

10:00-11:00am, Thursday, 19 May 2016, ITE 325b

Dr. Hui Su will talk about the Cognitive and Immersive Systems Lab, a research initiative to develop a new frontier of immersive cognitive systems that explore and advance natural, collaborative problem solving among groups of humans and machines. The lab is a collaboration between IBM Research and Rensselaer Polytechnic Institute. Dr. Su will discuss why human-computer interaction research is being extended to build a symbiotic relationship between human beings and smart machines, and what research will be important for building immersive cognitive systems that transform the way professionals work in the future.

Dr. Hui Su is the Director of the Cognitive and Immersive Systems Lab, a collaboration between IBM Research and Rensselaer Polytechnic Institute. He has been a technical leader and an executive at IBM Research. Most recently, he was the Director of the IBM Cambridge Research Lab in Cambridge, MA, where he was responsible for a broad scope of global missions in IBM Research, including Cognitive User Experience, the Center for Innovation in Visual Analytics, and the Center for Social Business. As a technical leader and researcher for 19 years at IBM Research, Dr. Su has become an expert in multiple areas, including human-computer interaction, cloud computing, visual analytics, and neural network algorithms for image recognition. As an executive, he has led research labs and research teams in the US and China. He is passionate about game-changing ideas and fundamental research, about speeding up the process by which technical innovations generate impact, and about discovering and developing new linkages between innovative research and business needs.

Host: Jian Chen ()

talk: Learning models of language, action and perception for human-robot collaboration

Learning models of language, action and perception
for human-robot collaboration

Dr. Stefanie Tellex
Department of Computer Science, Brown University

4:00pm Monday, 7 March 2016, ITE325b

Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow it to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication. Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot’s behavior, from low-level motion preferences (e.g., “Please fly up a few feet”) to high-level requests (e.g., “Please inspect the building”). I am creating new methods for learning how to plan in very large, uncertain state-action spaces by using hierarchical abstraction. Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ so the robot can perceive the objects most relevant to the human’s goals. My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.

Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum’s AI’s 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, and a 2016 Sloan Research Fellowship. Her work has been featured in the press on National Public Radio and MIT Technology Review; she was named one of Wired UK’s Women Who Changed Science In 2015 and listed as one of MIT Technology Review’s Ten Breakthrough Technologies in 2016.

Prof. Marie desJardins: one of ten AI researchers to follow on Twitter

TechRepublic identified CSEE professor Marie desJardins as one of “10 artificial intelligence researchers to follow on Twitter”. Check out her feed at @mariedj17.

“Want to know what’s happening at the epicenter of artificial intelligence? Follow these 10 AI researchers who make the most of their 140 characters on Twitter.”

Alexa, get my coffee: Using the Amazon Echo in Research

“Alexa, get my coffee”:
Using the Amazon Echo in Research

Megan Zimmerman

10:30am Monday, 7 December 2015, ITE 346

The Amazon Echo is a remarkable example of language-controlled, user-centric technology, but also a great example of how far such devices have to go before they will fulfill the longstanding promise of intelligent assistance. In this talk, we will describe the Interactive Robotics and Language Lab’s work with the Echo, with an emphasis on the practical aspects of getting it set up for development and adding new capabilities. We will demonstrate adding a simple new interaction, and then lead a brainstorming session on future research applications.
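
To give a concrete sense of what adding a simple new interaction involves, here is a minimal sketch of a custom Alexa skill handler. It assumes the skill is backed by an AWS Lambda function receiving the standard Alexa Skills Kit JSON request format; the intent name GetCoffeeIntent and the spoken replies are hypothetical illustrations, not the lab's actual code.

```python
# Minimal sketch of an AWS Lambda handler for a custom Alexa skill.
# Assumes the standard Alexa Skills Kit JSON request/response format;
# "GetCoffeeIntent" is a hypothetical intent name used for illustration.

def build_response(text, end_session=True):
    """Wrap plain-text speech in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes once per Alexa request."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        return build_response("Welcome. Try asking me to get your coffee.", end_session=False)
    if request.get("type") == "IntentRequest":
        if request.get("intent", {}).get("name") == "GetCoffeeIntent":
            return build_response("Okay, starting the coffee maker.")
    return build_response("Sorry, I did not understand that request.")
```

Roughly speaking, a new interaction is added by declaring an intent and its sample utterances in the skill's interaction model on the Amazon developer portal, then handling that intent name in code as above.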

Megan Zimmerman is a UMBC undergraduate majoring in computer science. She works on interpreting language about tasks at varying levels of abstraction, with a focus on interpreting abstract statements as possible task instructions for assistive technology.

talk: Grounded Language Acquisition: A Physical Agent Approach, Fri 10/9

The UMBC CSEE Seminar Series Presents

Grounded Language Acquisition: A Physical Agent Approach

Dr. Cynthia Matuszek

Interactive Robotics and Language Lab
Computer Science and Electrical Engineering, UMBC

12:00-1:00pm Friday, 9 Oct. 2015, ITE 325b

A critical component of understanding human language is the ability to map words and ideas in that language to aspects of the external world. This mapping, called the symbol grounding problem, has been studied since the early days of artificial intelligence; however, advances in language processing, sensory, and motor systems have only recently made it possible to directly interact with tangibly grounded concepts. In this talk, I describe how we combine robotics and natural language processing to acquire and use physically grounded language: specifically, how robots can learn to follow instructions, understand descriptions of objects, and build models of language and the physical world from interactions with users. I will describe our work on building a learning system that can ground English commands and descriptions from examples, making it possible for robots to learn from untrained end-users in an intuitive, natural way, and describe applications of our work in following directions and learning about objects. Finally, I will discuss how robots with these learning capabilities address a number of near-term challenges.
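
As a toy illustration of the symbol grounding idea (not of the lab's actual system), the sketch below learns a grounding for a few color words by averaging the perceptual features each word was paired with, then uses those groundings to label a new percept; the training pairs are invented for this example.

```python
# Toy symbol grounding: associate words with perceptual features (RGB values)
# from paired examples, then describe a new percept with the closest word.
# Invented data, for illustration only.
import math
from collections import defaultdict

# Hypothetical training pairs: (perceptual feature vector, word the speaker used)
examples = [
    ((0.90, 0.10, 0.10), "red"),
    ((0.80, 0.20, 0.10), "red"),
    ((0.10, 0.20, 0.90), "blue"),
    ((0.20, 0.10, 0.80), "blue"),
    ((0.10, 0.80, 0.20), "green"),
]

# Learn each word's grounding as the centroid of the features it labeled.
sums, counts = defaultdict(lambda: [0.0, 0.0, 0.0]), defaultdict(int)
for features, word in examples:
    for i, value in enumerate(features):
        sums[word][i] += value
    counts[word] += 1
groundings = {word: [total / counts[word] for total in sums[word]] for word in sums}

def describe(features):
    """Pick the word whose learned grounding is closest to the new percept."""
    return min(groundings, key=lambda word: math.dist(features, groundings[word]))

print(describe((0.85, 0.15, 0.10)))  # -> red
```

Real grounded language systems learn far richer mappings, from full sentences to objects, relations, and actions perceived by a robot, but the underlying problem of tying words to the physical world is the same.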

Cynthia Matuszek is an Assistant Professor in the University of Maryland, Baltimore County’s Computer Science and Electrical Engineering department. She completed her Ph.D. at the University of Washington in 2014, where she was a member of both the Robotics and State Estimation lab and the Language, Interaction, and Learning group. She has published in the areas of artificial intelligence, robotics, ubiquitous computing, and human-robot interaction. Her research interests include human-robot interaction, natural language processing, and machine learning.

Hosts: Professors Fow-Sen Choa () and Alan T. Sherman ()

· directions and more information ·

CSEE alumna Claudia Pearce asks what lies "Beyond Watson"

CSEE Alumna Dr. Claudia Pearce ('89 M.S., '94 Ph.D., Computer Science) has an article Beyond Watson in the Spring issue of UMBC Magazine. Dr. Pearce is a Senior Computer Science Authority at the National Security Agency and recipient of UMBC's 2014 Alumna of the Year Award in Engineering and Information Technology.

The article describes the technologies behind IBM's Watson Deep Question Answering system and its significance for the development of future intelligent systems.

"Part of the answer may be found in a series of Beyond Watson workshops and other activities involving university, government, and industry partners, including one held at UMBC in February.
          We’re looking at things like natural language, which is at the core of the ultimate computer human interface. Think back to Star Trek, when Spock would say something like: "Computer, what is the probability the Klingons will attack the Enterprise in this sector of the galaxy?" The computer would often engage in a back and forth interaction with Spock, asking for more information or clarification before offering one or more scenarios (and probabilities supporting those scenarios) to aid the crew of the Enterprise in their next move. When we can ask a computer a question and then engage in a dialogue with it in this way, we can freely use computers to their best advantage. This sort of exchange allows us to hone in on an answer (or answers) to our question, supported by knowing both how the information was derived and what level of confidence to place in it.
          Envisioning such a model opens up new vistas. We may not be limited to retrieving existing answers from Wikipedia-like text, but use all available data to elicit answers to non-obvious questions, or to queries that have never been asked. We might also dispense with arcane interfaces that are poorly matched to both the task at hand and the needs of users and consumers."


Prof. Oates: Stop Fearing Artificial Intelligence


UMBC's Professor Tim Oates has a column on TechCrunch describing why we should Stop Fearing Artificial Intelligence. Professor Oates has 20 years of experience working with a wide range of AI technologies, including machine learning, robotics, and natural language processing. In the piece, Dr. Oates explains that

"As yet another tech pioneer with no connection to artificial intelligence steps out to voice his fears about AI being catastrophic for the human race, I feel the need respond. … Conflating facts of technology's rapid progress with a Hollywood understanding of intelligent machines is provocative (honestly, it's a favorite in my most-loved science fiction books and movies), but this technology doesn't live in a Hollywood movie, it isn't HAL or Skynet, and it deserves a grounded, rational look.

and discusses some of the limitations of current intelligent systems like IBM's Watson. Like most AI researchers, he is a believer in Strong AI, the idea that there is no theoretical reason why a machine cannot exhibit behavior as skillful and flexible as a human's, but he doubts that such machines will necessarily be dangerous.

"But let's suppose, for a second, that an AI does learn to think intelligently outside its programming and that it’s become discontent. Would this superhuman intelligence inherently go nuclear, or would it likely just slack off a little at work or, in extreme cases, compose rap music in Latin? In a world filled with a nearly infinite number of things a thinking entity can do to placate itself, it's unlikely "destruction of humanity" will top any AI's list."

Stop Fearing Artificial Intelligence is a well-written and thought-provoking article.
