CSEE Professor Marie desJardins interviewed for Voices in AI podcast

Voices in AI – Episode 20: A Conversation with Marie desJardins

Byron Reese interviewed UMBC CSEE Professor Marie desJardins as part of his Voices in AI podcast series on Gigaom. In the episode, they talk about the Turing test, Watson, autonomous vehicles, and language processing.  Visit the Voices in AI site to listen to the podcast and read the interview transcript.

Here’s the start of the wide-ranging, hour-long interview.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today I’m excited that our guest is Marie des Jardins. She is an Associate Dean for Engineering and Information Technology as well as a professor of Computer Science at the University of Maryland, Baltimore County. She got her undergrad degree from Harvard, and a Ph.D. in computer science from Berkeley, and she’s been involved in the National Conference of the Association for the Advancement of Artificial Intelligence for over 12 years. Welcome to the show, Marie.

Marie des Jardins: Hi, it’s nice to be here.

I often open the show with “What is artificial intelligence?” because, interestingly, there’s no consensus definition of it, and I get a different kind of view of it from everybody. So I’ll start with that. What is artificial intelligence?

Sure. I’ve always thought about artificial intelligence as just a very broad term referring to trying to get computers to do things that we would consider intelligent if people did them. What’s interesting about that definition is it’s a moving target, because we change our opinions over time about what’s intelligent. As computers get better at doing things, they no longer seem that intelligent to us.

We use the word “intelligent,” too, and I’m not going to dwell on definitions, but what do you think intelligence is at its core?

So, it’s definitely hard to pin down, but I think of it as activities that human beings carry out, that we don’t know of lower order animals doing, other than some of the higher primates who can do things that seem intelligent to us. So intelligence involves intentionality, which means setting goals and making active plans to carry them out, and it involves learning over time and being able to react to situations differently based on experiences and knowledge that we’ve gained over time. The third part, I would argue, is that intelligence includes communication, so the ability to communicate with other beings, other intelligent agents, about your activities and goals.

Well, that’s really useful and specific. Let’s look at some of those things in detail a little bit. You mentioned intentionality. Do you think that intentionality is driven by consciousness? I mean, can you have intentionality without consciousness? Is consciousness therefore a requisite for intelligence?

I think that’s a really interesting question. I would decline to answer it mainly because I don’t think we ever can really know what consciousness is. We all have a sense of being conscious inside our own brains—at least I believe that. But of course, I’m only able to say anything meaningful about my own sense of consciousness. We just don’t have any way to measure consciousness or even really define what it is. So, there does seem to be this idea of self-awareness that we see in various kinds of animals—including humans—and that seems to be a precursor to what we call consciousness. But I think it’s awfully hard to define that term, and so I would be hesitant to put that as a prerequisite on intentionality.

UMBC researchers develop AI system to design clothing for your personal fashion style

 

AI system designs clothing for your personal fashion style

Everyone knows that more and more data is being collected about our everyday activities, like where we go online and in the physical world. Much of that data is being used for personalization. Recent UMBC CSEE master’s student Prutha Date explored a novel kind of personalization – creating clothing that matches your personal style.

Date developed a system that takes as input pictures of clothing in your closet, extracts a digital representation of your style preferences, and then applies that style to new articles of clothing, like a picture of a pair of pants or a dress you find online. This work meshes well with recent efforts by Amazon to manufacture clothing on demand. Imagine being able to click on an article of clothing available online, personalize it to your style, and then have it made and shipped right to your door!

This innovative research was cited in a recent article in MIT Technology Review, Amazon Has Developed an AI Fashion Designer.

Tim Oates, a professor at the University of Maryland in Baltimore County, presented details of a system for transferring a particular style from one garment to another. He suggests that this approach might be used to conjure up new items of clothing from scratch. “You could train [an algorithm] on your closet, and then you could say here’s a jacket or a pair of pants, and I’d like to adapt it to my style,” Oates says.

Fashion designers probably shouldn’t fret just yet, though. Oates and others point out that it may be a long time before a machine can invent a fashion trend. “People innovate in areas like music, fashion, and cinema,” he says. “What we haven’t seen is a genuinely new music or fashion style that was generated by a computer and really resonated with people.”

You can read more about the work in a recent paper by Prutha Date, Ashwinkumar Ganesan and Tim Oates, Fashioning with Networks: Neural Style Transfer to Design Clothes. The paper describes how convolutional neural networks were used to personalize and generate new custom clothes based on a person’s preferences, learning their fashion choices from a limited set of clothes in their closet.
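The clothing personalization described above builds on neural style transfer. As a rough, hypothetical sketch of that underlying technique only, and not a reproduction of the system in the paper, the PyTorch code below optimizes a target image so that its content matches a garment found online while its texture statistics (Gram matrices of convolutional features) match a garment from the user’s closet. The layer choices, loss weights, and file names are illustrative assumptions.

```python
# Minimal neural style transfer sketch (after Gatys et al.), assuming PyTorch
# and torchvision are available. Illustrative only -- not the authors' system.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # shallow-to-deep conv layers: texture and pattern
CONTENT_LAYER = 21                  # deeper conv layer: overall garment shape

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x):
    """Collect VGG activations at the style and content layers."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(feat):
    """Gram matrix of feature maps -- the 'style' statistics."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

# Hypothetical inputs: a garment from the closet (style) and one found online (content).
style_img = load_image("closet_jacket.jpg")
content_img = load_image("new_pants.jpg")
style_targets = [gram(f).detach() for f in features(style_img)[0]]
content_target = features(content_img)[1].detach()

target = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)
for step in range(300):
    opt.zero_grad()
    s_feats, c_feat = features(target)
    loss = F.mse_loss(c_feat, content_target) + \
           1e4 * sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_targets))
    loss.backward()
    opt.step()
# target now holds the content garment re-rendered in the closet garment's style.
```

A fuller system, closer to what the paper describes, would summarize style from a whole set of closet photos rather than a single image before applying it to a new garment.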

talk: Sarit Kraus on Computer Agents that Interact Proficiently with People, Noon Fri 8/4

 

Computer Agents that Interact Proficiently with People

Prof. Sarit Kraus
Department of Computer Science, Bar-Ilan University
Ramat-Gan, 52900 Israel

12:00-1:00pm Friday, 4 August 2017, ITE 217B, UMBC

Automated agents that interact proficiently with people can be useful in supporting, training or replacing people in complex tasks. The inclusion of people presents novel problems for the design of automated agents’ strategies. People do not necessarily adhere to the optimal, monolithic strategies that can be derived analytically. Their behavior is affected by a multitude of social and psychological factors. In this talk I will show how combining machine learning techniques for human modeling, human behavioral models, formal decision-making and game theory approaches enables agents to interact well with people. Applications include intelligent agents that help drivers reduce energy consumption, agents that support rehabilitation and employer-employee negotiation, and agents that support a human operator in managing a team of low-cost mobile robots in search and rescue tasks.

Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor and Chair of the Department of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems (including people and robots). In particular, she studies the development of intelligent agents that can interact proficiently with people, in both cooperative and conflicting scenarios. She considers modeling human behavior and predicting people’s decisions, along with developing formal models of the agent’s decision making, necessary for meeting these challenges. She has also contributed to research on agent optimization, homeland security, adversarial patrolling, social networks and nonmonotonic reasoning.

For her pioneering work she has received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research award, the EMET prize and was twice the winner of the IFAAMAS influential paper award. She is an ACM, AAAI and ECCAI fellow and a recipient of the advanced ERC grant. She also received a special commendation from the city of Los Angeles, together with Prof. Tambe, Prof. Ordonez and their USC students, for the creation of the ARMOR security scheduling system. She has published over 350 papers in leading journals and major conferences. She is the author of the book “Strategic Negotiation in Multiagent Environments” (2001) and a co-author of the books “Heterogeneous Active Agents” (2000) and “Principles of Automated Negotiation” (2014). Kraus is a senior associate editor of the Annals of Mathematics and Artificial Intelligence Journal and an associate editor of the Journal of Autonomous Agents and Multi-Agent Systems and of JAIR. She is a member of the board of directors of the International Foundation for Multi-agent Systems (IFAAMAS).

UMBC’s Prof. Cynthia Matuszek receives NSF award for robot language acquisition

Professor Cynthia Matuszek has received a research award from the National Science Foundation to improve human-robot interaction by enabling robots to understand the world from natural language, so they can take instructions and learn about their environment naturally and intuitively. The two-year award, Joint Models of Language and Context for Robotic Language Acquisition, will support Dr. Matuszek’s Interactive Robotics and Language Lab, which focuses on how robots can flexibly learn from interactions with people and environments.

As robots become smaller, less expensive, and more capable, they are able to perform an increasing variety of tasks, leading to revolutionary improvements in domains such as automobile safety and manufacturing. However, their inflexibility makes them hard to deploy in human-centric environments, such as homes and schools, where their tasks and environments are constantly changing. Meanwhile, learning to understand language about the physical world is a growing research area in both robotics and natural language processing. The core problem her research addresses is how the meanings of words are grounded in the noisy, perceptual world in which a robot operates.

The ability for robots to follow spoken or written directions reduces the adoption barrier for robots in domains such as assistive technology, education, and caretaking, where interactions with non-specialists are crucial. Such robots have the potential to ultimately improve autonomy and independence for populations such as aging-in-place elders; for example, a manipulator arm that can learn from a user’s explanation how to handle food or open novel containers would directly affect the independence of persons with dexterity concerns such as advanced arthritis.

Matuszek’s research will investigate how linguistic and perceptual models can be expanded during interaction, allowing robots to understand novel language about unanticipated domains. In particular, the focus is on developing new learning approaches that correctly induce joint models of language and perception, building data-driven language models that add new semantic representations over time. The work will combine semantic parser learning, which provides a distribution over possible interpretations of language, with perceptual representations of the underlying world. New concepts will be added on the fly as new words and new perceptual data are encountered, and a semantically meaningful model can be trained by maximizing the expected likelihood of language and visual components. This integrated approach allows for effective model updates with no explicit labeling of words or percepts. This approach will be combined with experiments on improving learning efficiency by incorporating active learning, leveraging a robot’s ability to ask questions about objects in the world.
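As a loose illustration of the general idea of grounding word meanings in perception, and not of the actual joint models being developed in this project, the sketch below keeps one incrementally trained classifier per word over perceptual feature vectors, creating new classifiers on the fly as unseen words appear in paired language-and-percept examples. The class names, features, and update scheme are assumptions made purely for illustration.

```python
# Toy grounded word learning: one incremental classifier per word over
# perceptual features, with new word models created on the fly.
# Illustrative only -- not the project's actual language/perception models.
import numpy as np
from sklearn.linear_model import SGDClassifier

def tokenize(sentence):
    return sentence.lower().split()

class GroundedLexicon:
    def __init__(self):
        self.models = {}  # word -> incrementally trained binary classifier

    def _model_for(self, word):
        if word not in self.models:        # a new concept, added on the fly
            self.models[word] = SGDClassifier(loss="log_loss")
        return self.models[word]

    def observe(self, sentence, pos_feats, neg_feats):
        """Treat the described object as a positive example for every word in
        the sentence, and other visible objects as negatives."""
        X = np.vstack([pos_feats, neg_feats])
        y = np.array([1] * len(pos_feats) + [0] * len(neg_feats))
        for word in tokenize(sentence):
            self._model_for(word).partial_fit(X, y, classes=[0, 1])

    def ground(self, word, feats):
        """Probability that a percept matches the word's learned meaning."""
        if word not in self.models:
            return 0.5                     # unknown word: uninformative
        return self.models[word].predict_proba(np.atleast_2d(feats))[0][1]

# Hypothetical usage: features might be color histogram bins plus a shape descriptor.
lex = GroundedLexicon()
red_cup = np.array([[0.9, 0.1, 0.0, 0.3]])
blue_block = np.array([[0.0, 0.1, 0.9, 0.8]])
lex.observe("the red cup", pos_feats=red_cup, neg_feats=blue_block)
lex.observe("a blue block", pos_feats=blue_block, neg_feats=red_cup)
print(lex.ground("red", [0.8, 0.2, 0.1, 0.4]))  # grows toward 1 as more examples arrive
```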

UMBC computer scientists explain how AI can help translate legalese before online users click “agree”

 

Every day, people interact with large amounts of text online, including legal documents they might quickly skim and sign without full, careful review. In an article recently published in The Conversation, Karuna Joshi, research associate professor of computer science and electrical engineering, and Tim Finin, professor of computer science and electrical engineering, explain how artificial intelligence (AI) is helping to summarize lengthy and complex legalese so people can more easily understand terms of service and similar agreements before they click “accept” to access a new app or online service.

The legal documents that Joshi and Finin are working to summarize—terms of service, privacy policies, and user agreements—often accompany new online services, contests, apps, and subscriptions. “As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand,” they explain.

Through their research, Joshi and Finin ask computers to break down the terms and conditions that regular users “agree” to or “accept.” To process the text, Joshi and Finin employ a range of AI technologies, including machine learning, knowledge representation, speech recognition, and human language comprehension.

Joshi and Finin have found that in many of the privacy policies people are prompted to review and accept online, there are sections that do not actually apply to the consumer or service provider. These sections of the agreements might, for example, “include rules for third parties…that people might not even know are involved in data storage or retrieval,” they note.

After examining these documents, the software Joshi and Finin have developed pinpoints specific items that people should be aware of when they are granting their consent or agreement—what they describe as “key information specifying the legal rights, obligations and prohibitions identified in the document.” In other words, the software takes in all that complex legal language, and then presents just the most essential information in “clear, direct, human-readable statements,” making it much more feasible for users to understand what they are consenting to before they click “agree.”
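To give a flavor of what extracting such statements can look like, here is a toy sketch that flags sentences expressing obligations, permissions, and prohibitions in a terms-of-service text using simple keyword rules. It is only an illustration under assumed patterns and categories; the researchers’ actual system relies on much richer language understanding and knowledge representation.

```python
# Toy extraction of obligations, permissions, and prohibitions from legal text
# using keyword rules. Purely illustrative -- not the researchers' system.
import re

DEONTIC_PATTERNS = {
    "prohibition": re.compile(r"\b(must not|shall not|may not|is prohibited from)\b", re.I),
    "obligation":  re.compile(r"\b(must|shall|is required to|agree to)\b", re.I),
    "permission":  re.compile(r"\b(may|is permitted to|has the right to)\b", re.I),
}

def split_sentences(text):
    # Crude splitter; a real pipeline would use a proper sentence segmenter.
    return [s.strip() for s in re.split(r"(?<=[.;])\s+", text) if s.strip()]

def summarize_terms(text):
    """Group sentences by the kind of legal statement they appear to make.
    Prohibitions are checked first so 'must not' is not counted as an obligation."""
    summary = {label: [] for label in DEONTIC_PATTERNS}
    for sentence in split_sentences(text):
        for label in ("prohibition", "obligation", "permission"):
            if DEONTIC_PATTERNS[label].search(sentence):
                summary[label].append(sentence)
                break
    return summary

sample = ("You must not share your account credentials. "
          "The provider may suspend accounts at any time. "
          "You agree to pay all applicable fees.")
for label, sentences in summarize_terms(sample).items():
    print(label, "->", sentences)
```

Running the sketch on the sample text groups each sentence under prohibition, permission, or obligation, a rough analogue of the “clear, direct, human-readable statements” the researchers aim to produce.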

Read “Teaching machines to understand — and summarize — text” in The Conversation to learn more about Joshi and Finin’s approach to making online legal documents more accessible through AI.

Adapted from a UMBC News article by Megan Hanks. Banner image: Karuna Joshi; photo by Marlayna Demond ’11 for UMBC.

Prof. Marie desJardins named by Forbes as one of 21 women who are advancing AI research

An article on Forbes’ site this week cites UMBC’s Professor Marie desJardins as one of 21 women who are advancing A.I. research. The article notes that artificial intelligence is “eating the world, transforming virtually every industry and function” and highlights women who are AI educators, researchers and business leaders who are driving the development and application of AI technology.

Professor desJardins joined the faculty at UMBC in 2001, after spending ten years as a research scientist in the Artificial Intelligence Center of SRI International in Menlo Park, California. She received her Ph.D. in computer science in 1991 from the University of California, Berkeley where her dissertation advanced autonomous learning systems in probabilistic domains.

The Forbes article states that

Marie desJardins has always been driven by broad, big-picture questions in AI rather than narrow technical applications. For her PhD dissertation at Berkeley, she worked on “goal-driven machine learning” where she designed methods an intelligent agent can use to figure out what and how to learn. As an Associate Dean and Professor at University of Maryland, Baltimore County (UMBC), desJardins has published over 120 scientific papers and won accolades for her teaching, but is equally proud of work she’s done with graduate students on self-organization and trust in multiagent systems.

When desJardins started her career, the AI and computing industry attracted more diverse, multi-disciplinary talent. Over time, she observed that conferences are “increasingly dominated with papers that focused almost exclusively on one subproblem (supervised classification learning) and much less welcoming of work in other subareas (active learning, goal-directed learning, applied learning, cognitive learning, etc),” which she is worried will exacerbate the diversity gap in AI.

“We are already seeing a reconsideration of more symbolic, representation-based approaches,” desJardins observes. “Ultimately I think that we will build more and more bridges between numerical approaches and symbolic approaches, and create layered architectures that take advantage of both.”

Her current research focuses on artificial intelligence, particularly machine learning, planning and decision making, and multi-agent systems. She has published over 125 scientific papers on these topics, and was recently named one of the “Ten AI Researchers to Follow on Twitter” by TechRepublic. At UMBC, she has been PI or co-PI on over $6,000,000 of external research funding, including a prestigious NSF CAREER Award, and has graduated 11 Ph.D. students and 25 M.S. students. She is particularly well known on campus and in her professional community for her commitment to student mentoring: she has been involved with the AAAI/SIGART Doctoral Consortium for the last 16 years and has worked with over 70 undergraduate researchers and four high school student interns. She was awarded the 2014 NCWIT Undergraduate Research Mentoring Award and the 2016 CRA Undergraduate Research Mentoring Award in recognition of her commitment to undergraduate research.

talk: Human-Like Strategies for Language-Endowed Intelligent Agents, 11am Fri 4/28, UMBC

The UMBC Center for Hybrid Multicore Productivity Research (CHMPR)
is pleased to present as part of our distinguished lecture series

Human-Like Strategies for Language-Endowed Intelligent Agents

Dr. Sergei Nirenburg
Professor of Cognitive Science
Rensselaer Polytechnic Institute

11:00am Friday, 28 April 2017, ITE 325b

 

Artificial intelligent agents functioning in human-agent teams must correctly interpret perceptual input and make appropriate decisions about their actions. These are arguably the two central problems in computational cognitive modeling. The RPI LEIA Lab builds language-endowed intelligent agents that extract meaning of text and dialog and use the results together with input from other perception modes, a long-term belief repository, rich models of the world and of other agents, and a model of the interaction situation to make decisions about actions. Specific phenomena we currently concentrate on include incrementality, treatment of unexpected input and non-literal language (e.g., metaphor), analysis of agent biases and “mindreading,” and deliberate concept learning. All these studies are characterized by our belief in the ultimate utility of building causal models of agent capabilities that are inspired by human strategies in language processing and decision-making that go beyond analogical reasoning. In this talk I will give an overview of our recent work in the above areas.

Sergei Nirenburg is Professor of Cognitive Science and Computer Science at the Rensselaer Polytechnic Institute. He also serves as Head of the Department of Cognitive Science. He has worked in the areas of cognitive science, artificial intelligence and natural language processing for over 35 years, leading R&D teams of up to 80. Dr. Nirenburg’s professional interests include developing computational models of human cognitive capabilities and implementing them in computer models of societies of human and computer agents, continuing development of the theory of ontological semantics, and the acquisition and management of knowledge about the world and about language. Academic R&D teams under Dr. Nirenburg’s leadership have implemented a variety of proof-of-concept and prototype application systems for cognitive modeling, intelligent tutoring and a variety of NLP tasks (machine translation, question answering, text summarization, information extraction, computational field linguistics, knowledge elicitation and learning). Dr. Nirenburg has written two and edited five books and published over 200 scholarly articles in journals and peer-reviewed conference proceedings.

Microsoft launches competition to create collaborative AI system to play Minecraft

 

A Microsoft Research team challenged PhD students to craft an advanced AI-based system that can collaborate with people in playing the popular Minecraft game, offering three $20K prizes. Minecraft was chosen because it offers an environment that, while relatively simple in some ways, requires advances in areas that are still difficult for artificial computer agents to handle. The challenge asks questions like the following.

“How can we develop artificial intelligence that learns to make sense of complex environments? That learns from others, including humans, how to interact with the world? That learns transferable skills throughout its existence, and applies them to solve new, challenging problems?”

Microsoft’s Project Malmo addresses these questions by integrating deep reinforcement learning, cognitive science, and ideas from many areas of AI. The Malmo platform is a sophisticated AI experimentation system built on top of Minecraft that is designed to support fundamental research in artificial intelligence.

A recent TechRepublic article, Microsoft competition asks PhD students to create advanced AI to play Minecraft, describes the competition and quotes UMBC Professor Marie desJardins on the project.

“Marie desJardins, AI professor at the University of Maryland, Baltimore County, sees Minecraft as an ‘interesting and challenging problem for AI systems, because of the fundamental complexity of the game environment, the open-ended nature of the scoring system, and the opportunity to collaborate with other game players (AIs or humans).’

But desJardins also raises concerns when it comes to these competitions. ‘Who owns the resulting intellectual property?’ she asked. ‘Are these kinds of contests the best way for grad students to spend their time? Do these competitions foster or decrease diversity? Who ultimately profits from the contests?’”

The Malmo challenge is open to PhD students who register by April 14, 2017. After registration, teams of one to three members are given a task that consists of one or more mini-games. The goal is to develop an AI solution that learns how to work with other, randomly assigned players to achieve a high score in the game. Participants submit their solutions to GitHub by May 15, including a one-minute video that shows off the AI agent and summarizes what is interesting about their approach.

 


Microsoft’s Katja Hofmann discusses Project Malmo

IBM’s Arvind Krishna, Accelerating Technology Disruption: the Cognitive Revolution, 1pm Fri 2/24, UMBC

CSEE Department Distinguished Seminar

Accelerating Technology Disruption: The Cognitive Revolution

Dr. Arvind Krishna
Senior Vice President, Hybrid Cloud and Director, IBM Research

1:00-2:00pm, Friday, 24 February 2017, PAHB 132

Digital disruption is changing the world around us, breaking down traditional barriers to market entry, creating new business models, and leading to new solutions to global challenges. Dr. Arvind Krishna will examine some of the core emerging technologies driving this phenomenon today, with an emphasis on artificial intelligence/cognitive computing. He will also share his perspectives on what it takes to build a successful, high-impact technical career in an era of disruptive innovation.

Arvind Krishna is senior vice president, Hybrid Cloud, and director of IBM Research. In this role, he leads the company’s hybrid cloud business, including strategy, product design, offering development, marketing, sales and service. He also helps guide IBM’s overall technical strategy in core and emerging technologies including cognitive computing, quantum computing, cloud platform services, data-driven solutions and blockchain. Previously, Arvind was general manager of IBM Systems and Technology Group’s development and manufacturing organization, responsible for developing and engineering everything from advanced semiconductor materials to leading-edge microprocessors, servers and storage systems.

Earlier in his career, he was general manager of IBM Information Management, which included database, information integration and big data software solutions. Prior to that, he was vice president of strategy for IBM Software. He has held several key technical roles in IBM Software and IBM Research, where he pioneered IBM’s security software business. Arvind has an undergraduate degree from the Indian Institute of Technology, Kanpur and a Ph.D. from the University of Illinois at Urbana-Champaign. He is the recipient of a distinguished alumni award from the University of Illinois, is the co-author of 15 patents, has been the editor of IEEE and ACM journals, and has published extensively in technical conferences and journals.

Prof. Marie desJardins comments on the risks of autonomous weapon systems

UMBC’s Professor Marie desJardins was quoted recently in a TechRepublic article on the possible risks of adding more autonomy to weapons used by police and the military. The article focused on the novel use of a remotely controlled bomb-disposal robot by Dallas police to kill the suspect involved in the shooting of police officers. Although the robot was manually controlled by police officers, its use raised concerns about future devices that are expected to have the capacity for independent decision making and action.

Marie desJardins, AI professor at the University of Maryland in Baltimore County, agrees with Yampolskiy. “The real challenge will come when we start to put more autonomy into drones and assault robots. I don’t think that we should be building weapons that can, of their own accord, decide who to kill, and how to kill,” said desJardins.

“I think those decisions always need to be made by people—not just by individual people, but by processes in military organizations that have safeguards and accountability measures built into the process,” she said.

These issues were addressed by a recent series of workshops sponsored by the White House Office of Science and Technology Policy to learn more about the benefits and risks of artificial intelligence.
