Prof. Cynthia Matuszek on how robots could help bridge the elder-care gap

Robots can also lend a hand of sorts. Photographee.eu/Shutterstock.com

 

How robots could help bridge the elder-care gap

Cynthia Matuszek, University of Maryland, Baltimore County

Despite innovations that make it easier for seniors to keep living on their own rather than moving into special facilities, most elderly people eventually need a hand with chores and other everyday activities.

Friends and relatives often can't do all the work. Growing evidence indicates that leaning on them alone is neither sustainable nor healthy for seniors or their loved ones. Yet demand for professional caregivers already far outstrips supply, and experts say this workforce shortage will only get worse.

So how will our society bridge this elder-care gap? In a word, robots.

Just as automation has begun to do jobs previously seen as uniquely suited for humans, like retrieving goods from warehouses, robots will assist your elderly relatives. As a robotics researcher, I believe artificial intelligence has the potential not only to care for our elders but to do so in a way that increases their independence and reduces their social isolation.

Personal robots

Robots can hand medicine to patients. UMBC Interactive Robotics and Language Lab, CC BY-SA

In the 2004 movie “I, Robot,” the robot-hating protagonist Del Spooner (played by Will Smith) is shocked to discover a robot in his grandmother’s house, baking a pie. You may have similar mental images: When many people imagine robots in the home, they envision mechanized domestic workers doing tasks in human-like ways.

In reality, many of the robots that will provide support for older adults who “age in place” – staying at home when they might otherwise be forced to relocate to assisted living or nursing homes – won’t look like people.

Instead, they will be specialized systems, akin to the Roomba, iRobot’s robotic vacuum cleaner and the first commercially successful consumer robot. Small, specific devices are not only easier to design and deploy, they allow for incremental adoption as requirements evolve over time.

Seniors, like everyone else, need different things. Many need help with the mechanics of eating, bathing, dressing and standing up – tasks known as “activities of daily living.” Along with daily help with cooking and managing their medications, they can benefit from a robotic hand with more intermittent things such as doing the laundry and getting to the doctor’s office.

That may sound far-fetched, but in addition to vacuuming, robots can already mop our floors and mow our lawns. Experimental robots help lift people into and out of chairs and beds, follow recipes, fold towels and dispense pills. Soon, autonomous (self-driving) cars will ferry people to appointments and gatherings.

The kinds of robots already available include models that drive, provide pet-like social companionship and greet customers. Some of these technologies are already in limited trials in nursing homes, and seniors of course can already rely on their own Roombas.

Pepper, a social companion robot, in a retail environment. Tokumeigakarinoaoshima

Meanwhile, robot companions may soon help relieve loneliness and nudge forgetful elders to eat on a regular schedule.

Scientists and other inventors are building robots that will do these jobs and many others.

Round-the-clock care

While some tasks remain out of reach of today’s robots, such as inserting IVs or trimming toenails, mechanical caregivers can offer clear advantages over their human counterparts.

The most obvious one is their capacity to work around the clock. Machines, unlike people, are available 24/7. When used in the home, they can support aging in place.

Another plus: Relying on technology to meet day-to-day needs like mopping the floor can improve the quality of time elders spend with family and friends. Delegating mundane chores to robots also leaves more time for seniors to socialize with the people who care about them, and not just for them.

And since using devices isn’t the same as asking someone for help, relying on caregiving robots may lead seniors to perceive less lost autonomy than when they depend on human helpers.

Jenay Beer, a researcher at the University of South Carolina, advocates using robots to help elders age in place.

 

Interacting with robots

This brave new world of robot caregivers won’t take shape unless we make them user-friendly and intuitive, and that means interaction styles matter. In my lab, we work on how robots can interact with people by talking with them. Fortunately, recent research by the Pew Research Center shows that older adults are embracing technology more and more, just like everyone else.

Now that we are beginning to see robots that can competently perform some tasks, researchers like Jenay Beer, an assistant professor of computer science and engineering at the University of South Carolina, are trying to figure out which activities seniors need the most help with and what kinds of robots they might be most willing to use in the near term.

To that end, researchers are asking seniors and caregivers which tasks they most want automated and what kinds of robots they would welcome into their homes.

But the fact is we don't need all the answers before robots begin to help elders age in place.

Looking ahead

After all, there’s no time to lose.

The Census Bureau estimated that 15 percent of Americans – nearly one in six of us – were aged 65 or older in 2016, up from 12 percent in 2000. Demographers anticipate that by 2060 almost one in four will be in that age group. That means there will be some 48 million more elderly people in the U.S. than there are now.

I believe robots will perform many elder-care tasks within a decade. Some activities will still require human caregivers, and there are people for whom robotic assistance will never be the answer. But you can bet that robots will help seniors age in place, even if they won’t look like butlers or pastry chefs.

Cynthia Matuszek, Assistant Professor of Computer Science and Electrical Engineering, UMBC, University of Maryland, Baltimore County

This article was originally published on The Conversation. Read the original article.

talk: Sarit Kraus on Computer Agents that Interact Proficiently with People, Noon Fri 8/4

 

Computer Agents that Interact Proficiently with People

Prof. Sarit Kraus
Department of Computer Science, Bar-Ilan University
Ramat-Gan, 52900 Israel

12:00-1:00pm Friday, 4 August 2017, ITE 217B, UMBC

Automated agents that interact proficiently with people can be useful in supporting, training or replacing people in complex tasks. The inclusion of people presents novel problems for the design of automated agents' strategies. People do not necessarily adhere to the optimal, monolithic strategies that can be derived analytically; their behavior is affected by a multitude of social and psychological factors. In this talk I will show how combining machine learning techniques for human modeling, human behavioral models, formal decision-making and game theory approaches enables agents to interact well with people. Applications include intelligent agents that help drivers reduce energy consumption, agents that support rehabilitation, agents for employer-employee negotiation, and agents that support a human operator in managing a team of low-cost mobile robots in search and rescue tasks.
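The core recipe is to learn a model of how people actually behave, then plan a best response to that model rather than to an idealized rational player. The Python below is a minimal, hypothetical sketch of that idea, not code from Prof. Kraus's systems: the payoff matrices, the quantal-response (softmax) behavioral model, and the grid-search fit are all stand-in assumptions.

```python
# Hypothetical sketch: best-respond to a *learned* human model, not to an
# idealized rational opponent. Payoffs and data are invented for illustration.
import numpy as np

# Rows = agent actions, columns = human actions (a made-up 2x2 game).
AGENT_PAYOFF = np.array([[3.0, 0.0],
                         [1.0, 2.0]])
HUMAN_PAYOFF = np.array([[3.0, 1.0],
                         [2.0, 0.0]])

def quantal_response(payoffs, rationality):
    """Softmax ('quantal response') choice model: people favor better actions
    but do not pick the optimum deterministically."""
    z = rationality * payoffs
    e = np.exp(z - z.max())
    return e / e.sum()

def fit_rationality(observed, candidates, payoffs):
    """Grid-search the rationality parameter that maximizes the likelihood of
    observed human choices (a stand-in for richer learned human models)."""
    def log_lik(lam):
        return np.sum(np.log(quantal_response(payoffs, lam)[observed]))
    return max(candidates, key=log_lik)

def best_response(agent_payoffs, human_probs):
    """Agent action with the highest expected payoff against the predicted human."""
    return int(np.argmax(agent_payoffs @ human_probs))

# Simulate observing 200 human choices, fit the model, then best-respond.
rng = np.random.default_rng(0)
avg_human = HUMAN_PAYOFF.mean(axis=0)   # human's average payoff per action
observed = rng.choice(2, size=200, p=quantal_response(avg_human, 2.0))
lam = fit_rationality(observed, [0.1, 0.5, 1.0, 2.0, 5.0], avg_human)
prediction = quantal_response(avg_human, lam)
print("predicted human behavior:", prediction)
print("agent best response: action", best_response(AGENT_PAYOFF, prediction))
```

Real systems replace the toy softmax with richer behavioral models and the tiny game with negotiation or operator-support domains, but the separation of human modeling from decision making is the same.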

Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor and the Department Chair of Computer Science at Bar-Ilan University. Her research focuses on intelligent agents and multi-agent systems (including people and robots). In particular, she studies the development of intelligent agents that can interact proficiently with people, in both cooperative and conflicting scenarios. She considers modeling human behavior and predicting people's decisions, along with developing formal models of agent decision making, necessary for meeting these challenges. She has also contributed to research on agent optimization, homeland security, adversarial patrolling, social networks and nonmonotonic reasoning.

For her pioneering work she has received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, the EMET Prize, and was twice the winner of the IFAAMAS Influential Paper Award. She is an ACM, AAAI and ECCAI Fellow and a recipient of an ERC Advanced Grant. She also received a special commendation from the city of Los Angeles, together with Prof. Tambe, Prof. Ordonez and their USC students, for the creation of the ARMOR security scheduling system. She has published over 350 papers in leading journals and major conferences. She is the author of the book “Strategic Negotiation in Multiagent Environments” (2001) and a co-author of the books “Heterogeneous Active Agents” (2000) and “Principles of Automated Negotiation” (2014). Kraus is a senior associate editor of the Annals of Mathematics and Artificial Intelligence journal and an associate editor of the Journal of Autonomous Agents and Multi-Agent Systems and of JAIR. She is a member of the board of directors of the International Foundation for Multi-agent Systems (IFAAMAS).

UMBC PhD candidate Kavita Krishnaswamy gets Google & Microsoft awards for robotics research

 

UMBC Ph.D. candidate Kavita Krishnaswamy receives
Google and Microsoft awards for robotics research

 

Kavita Krishnaswamy ’07, computer science and mathematics, Ph.D. ’18, computer science, has been named both a 2017 Microsoft Fellow and a recipient of the Google Lime Scholarship. These prestigious honors recognize emerging scholars in computing who are dedicated to increasing diversity in the field, and Krishnaswamy’s awards will support her Ph.D. research on “Smart Algorithms via Knowledge Management of Safe Physical Human-Robotic Care.”

“I am very humbled, honored, and grateful to Google Lime and Microsoft for providing me with this enriching experience and lifetime opportunity to serve society by advancing the field of human-robot interaction,” Krishnaswamy says.

The Google Lime Scholarship seeks to promote greater access to knowledge for people with visible and invisible disabilities. It was established through a partnership between Google and Lime Connect, a nonprofit focused on breaking stereotypes about disability and encouraging companies to recognize the importance and value of employing people with disabilities.

The program encourages students with disabilities to pursue their passions in computing and technology, and to become leaders in those fields. As part of the fellowship program, Krishnaswamy will receive scholarship funding and will participate in the 2017 Google Scholars’ Retreat.

Through the Microsoft Fellowship, Krishnaswamy will receive funding to support her Ph.D. research and will participate in the Microsoft Research workshop held in fall 2017. Her research currently focuses on building a teleoperated mobile robotic prototype, along with an accessible robotic interface, that seniors and people with disabilities will be able to control to reposition their own arms and legs. Tim Oates, professor of computer science and electrical engineering, is Krishnaswamy’s Ph.D. advisor.

“Our goal is to explore the intersection between providing physical care and robotics, and how it is possible to translate safe patient handling and mobility guidelines into smart human-robotic interaction algorithms,” Krishnaswamy explains. “As assistive robotics become more mainstream, these best practices can improve safety in direct physical care in the process of repositioning the human body with a mobile robotic arm.”

Krishnaswamy is excited about the possible new directions her research can now explore, thanks to support from these awards. “The resources will provide me with a solid and steady foundation to cultivate new technical expertise and professional skills to successfully continue my dissertation research in robotics, and to broaden my knowledge in the field,” she says.

Krishnaswamy has been recognized internationally as an emerging leader in robotics and accessibility design throughout her graduate studies. She is a former Ford Foundation Predoctoral Fellow, and a National Science Foundation Graduate Research Fellow. In 2015, she was named to Robohub’s “25 Women in Robotics You Need to Know About” list.

This post was adapted from a UMBC News article written by Megan Hanks. Image by Kavita Krishnaswamy.

UMBC’s Prof. Cynthia Matuszek receives NSF award for robot language acquisition

Professor Cynthia Matuszek has received a research award from the National Science Foundation to improve human-robot interaction by enabling robots to understand the world from natural language, so that they can take instructions and learn about their environment naturally and intuitively. The two-year award, Joint Models of Language and Context for Robotic Language Acquisition, will support Dr. Matuszek’s Interactive Robotics and Language Lab, which focuses on how robots can flexibly learn from interactions with people and environments.

As robots become smaller, less expensive, and more capable, they are able to perform an increasing variety of tasks, leading to revolutionary improvements in domains such as automobile safety and manufacturing. However, their inflexibility makes them hard to deploy in human-centric environments, such as homes and schools, where their tasks and environments are constantly changing. Meanwhile, learning to understand language about the physical world is a growing research area in both robotics and natural language processing. The core problem her research addresses is how the meanings of words are grounded in the noisy, perceptual world in which a robot operates.

The ability for robots to follow spoken or written directions reduces the adoption barrier for robots in domains such as assistive technology, education, and caretaking, where interactions with non-specialists are crucial. Such robots have the potential to ultimately improve autonomy and independence for populations such as aging-in-place elders; for example, a manipulator arm that can learn from a user’s explanation how to handle food or open novel containers would directly affect the independence of persons with dexterity concerns such as advanced arthritis.

Matuszek’s research will investigate how linguistic and perceptual models can be expanded during interaction, allowing robots to understand novel language about unanticipated domains. In particular, the focus is on developing new learning approaches that correctly induce joint models of language and perception, building data-driven language models that add new semantic representations over time. The work combines semantic parser learning, which provides a distribution over possible interpretations of language, with perceptual representations of the underlying world. New concepts will be added on the fly as new words and new perceptual data are encountered, and a semantically meaningful model can be trained by maximizing the expected likelihood of language and visual components. This integrated approach allows for effective model updates with no explicit labeling of words or percepts. This approach will be combined with experiments on improving learning efficiency by incorporating active learning, leveraging a robot’s ability to ask questions about objects in the world.
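To give a flavor of what such a joint model looks like, here is a minimal, hypothetical sketch of grounded word learning: each word is tied to a simple model over perceptual features, new words get models created on the fly, and the only supervision is a sentence paired with the object the speaker referred to. It reduces the semantic-parsing side to a bag-of-words treatment and uses made-up three-dimensional "percepts"; it illustrates the general idea, not the lab's actual code.

```python
# Hypothetical sketch of grounded language acquisition: words are grounded by
# per-word models over perceptual features, learned from sentence-object pairs
# with no per-word labels. Features and data are invented stand-ins.
import numpy as np

class WordGrounding:
    """Per-word Gaussian model over perceptual features (color, shape, ...)."""
    def __init__(self, dim):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.count = 0

    def update(self, features):
        # Online (Welford-style) mean/variance update from one more referent.
        self.count += 1
        delta = features - self.mean
        self.mean += delta / self.count
        self.var += (delta * (features - self.mean) - self.var) / self.count

    def log_likelihood(self, features):
        v = self.var + 1e-6
        return float(-0.5 * np.sum((features - self.mean) ** 2 / v + np.log(v)))

class GroundedLexicon:
    def __init__(self, feature_dim):
        self.dim = feature_dim
        self.words = {}  # word -> WordGrounding, created on the fly

    def observe(self, sentence, referent_features):
        """Learn from a sentence paired with the features of the object the
        speaker was describing; every word in it is (weakly) credited."""
        for word in sentence.lower().split():
            self.words.setdefault(word, WordGrounding(self.dim)).update(referent_features)

    def resolve(self, sentence, candidate_features):
        """Pick the candidate object that best matches the sentence."""
        scores = [sum(self.words[w].log_likelihood(f)
                      for w in sentence.lower().split() if w in self.words)
                  for f in candidate_features]
        return int(np.argmax(scores))

# Toy usage: 3-D "percepts" standing in for real color/shape features.
lex = GroundedLexicon(feature_dim=3)
red_mug, blue_mug = np.array([0.9, 0.1, 0.4]), np.array([0.1, 0.9, 0.4])
for _ in range(20):
    lex.observe("the red mug", red_mug + np.random.randn(3) * 0.05)
    lex.observe("the blue mug", blue_mug + np.random.randn(3) * 0.05)
print(lex.resolve("red mug", [blue_mug, red_mug]))  # -> 1
```

The same scaffolding extends naturally to active learning: when the per-word models assign similar likelihoods to several candidate objects, the robot has a principled reason to ask a question.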

Are you interested in assistive robotics research?


Kavita Krishnaswamy is a Ph.D. candidate in the UMBC Computer Science program and has Spinal Muscular Atrophy (SMA), a neuromuscular disorder that affects the control of muscle movement.

Her goal is to develop robotics aids to increase independence for people with physical disabilities like herself. As part of her research she is conducting a survey on attitudes toward robotic aids and how they may improve the quality of life for those with physical disabilities, their family members, and their caregivers.

If you have a physical disability, are a caregiver for a person with a physical disability, or are a friend or family member of a person with a physical disability, you can help Kavita with her research by participating in the survey. Participation is voluntary and anonymous. Participants must be 18 years or older. You can access the survey here.

This study has been reviewed and approved by the UMBC Institutional Review Board (IRB). A representative of that Board, from the Office for Research Protections and Compliance, is available to discuss the review process or Kavita’s rights as a research participant. The Office can be reached at (410) 455-2737.

Prof. Marie desJardins comments on the risks of autonomous weapon systems

UMBC’s Professor Marie desJardins was quoted recently in a TechRepublic article on the possible risks of adding more autonomy to weapons used by police and the military. The article focused on the novel use of a remotely controlled bomb-disposal robot by Dallas police to kill the suspect in the shooting of police officers. Although the robot was manually controlled by police officers, its use raised concerns about future devices that are expected to have the capacity for independent decision-making and action.

Marie desJardins, AI professor at the University of Maryland in Baltimore County, agrees with Yampolskiy. “The real challenge will come when we start to put more autonomy into drones and assault robots. I don’t think that we should be building weapons that can, of their own accord, decide who to kill, and how to kill,” said desJardins.

“I think those decisions always need to be made by people—not just by individual people, but by processes in military organizations that have safeguards and accountability measures built into the process,” she said.

These issues were addressed by a recent series of workshops sponsored by the White House Office of Science and Technology Policy to learn more about the benefits and risks of artificial intelligence.

talk: Human mental models and robots: Grasping and tele-presence, 11am 5/9


Human mental models and robots:
Grasping and tele-presence

Dr. Cindy Grimm, Oregon State University

11:00am-12:00pm Monday, 9 May 2016, ITE 325b

In this talk I will cover two separate research efforts in robotics, both of which use human mental models to improve robotic functionality. Robots struggle to pick up and manipulate physical objects, yet humans do this with ease – but can’t tell you how they do it. In this research we focus on how to capture human data in such a way as to gain insight into how people structure the grasping task. Specifically, we look at the role of perceptual cues in evaluating grasps and mental classification models of grasps (i.e., all these grasps are the “same”). In the second half of the talk I will switch to discussing how human mental models of privacy, trust, and presence come into play in remote tele-presence applications (“Skype-on-a-movable-stick”).

Dr. Cindy Grimm is currently an associate professor at Oregon State University (since 2013) in the School of Mechanical, Industrial, and Manufacturing Engineering (application area: robotics). Prior to that she was tenured faculty in Computer Science at Washington University in St. Louis (12 years). Her research areas range from 3D sketching to biological modeling to human-robot interaction. She approaches these problems with a combination of mathematical models and empirically verified human-centered design (HCD). Mathematical models provide a sound, quantitative, rigorous, elegant basis for representing shape and function, and are a core part of the “language” of computation. Including a human in the loop is a key component of the application areas she works in; HCD provides the mechanism for addressing the fundamental problem of how to make mathematical computation “useful” for humans. She has worked with collaborators in fields ranging from psychology, mechanical and biological engineering, and statistics, to art.

talk: Learning models of language, action and perception for human-robot collaboration

Learning models of language, action and perception
for human-robot collaboration

Dr. Stefanie Tellex
Department of Computer Science, Brown University

4:00pm Monday, 7 March 2016, ITE 325b

Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception. Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow it to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication. Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot’s behavior, from low-level motion preferences (e.g., “Please fly up a few feet”) to high-level requests (e.g., “Please inspect the building”). I am creating new methods for learning how to plan in very large, uncertain state-action spaces by using hierarchical abstraction. Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ so the robot can perceive the objects most relevant to the human’s goals. My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.
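One concrete payoff of maintaining a distribution over interpretations is that a robot can measure its own uncertainty and ask for clarification instead of guessing. The sketch below is a hypothetical illustration of that idea, not Dr. Tellex's system; the candidate interpretations, probabilities, and entropy threshold are invented.

```python
# Hypothetical sketch: act on a command when the interpretation distribution
# is confident, otherwise communicate uncertainty by asking a question.
import math

def entropy(probs):
    """Shannon entropy (in bits) of a distribution over interpretations."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def act_or_ask(interpretations, threshold_bits=0.8):
    """Execute the most likely interpretation if confident; otherwise ask."""
    probs = [p for _, p in interpretations]
    if entropy(probs) <= threshold_bits:
        best = max(interpretations, key=lambda kv: kv[1])[0]
        return f"executing: {best}"
    options = " or ".join(name for name, _ in interpretations)
    return f"asking: did you mean {options}?"

# "Pick up the block" with two blocks in view -> nearly uniform, so ask.
print(act_or_ask([("pick up red block", 0.52), ("pick up blue block", 0.48)]))
# "Fly up a few feet" -> one dominant reading, so act.
print(act_or_ask([("ascend 1m", 0.95), ("ascend 10m", 0.05)]))
```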

Stefanie Tellex is an Assistant Professor of Computer Science and Assistant Professor of Engineering at Brown University. Her group, the Humans To Robots Lab, creates robots that seamlessly collaborate with people to meet their needs using language, gesture, and probabilistic inference, aiming to empower every person with a collaborative robot. She completed her Ph.D. at the MIT Media Lab in 2010, where she developed models for the meanings of spatial prepositions and motion verbs. Her postdoctoral work at MIT CSAIL focused on creating robots that understand natural language. She has published at SIGIR, HRI, RSS, AAAI, IROS, ICAPS and ICMI, winning Best Student Paper at SIGIR and ICMI, Best Paper at RSS, and an award from the CCC Blue Sky Ideas Initiative. Her awards include being named one of IEEE Spectrum’s AI’s 10 to Watch in 2013, the Richard B. Salomon Faculty Research Award at Brown University, a DARPA Young Faculty Award in 2015, and a 2016 Sloan Research Fellowship. Her work has been featured in the press on National Public Radio and MIT Technology Review; she was named one of Wired UK’s Women Who Changed Science in 2015, and her work was listed as one of MIT Technology Review’s Ten Breakthrough Technologies in 2016.

HackUMBC 24-hour student hackathon, 5-6 March 2016 at UMBC


HackUMBC is a 24-hour student hackathon that will take place on Saturday and Sunday, March 5-6, 2016 at UMBC. It’s an opportunity to learn new skills, make friends, create your wildest idea, and share it with the world. Build an app, a website, a robotic arm, a game, anything. It’s free, and food, beverages, swag, workspaces, and sleeping areas will be provided. All undergraduate, graduate, and high school students are welcome, but pre-registration is required. Get more information and apply at https://hackumbc.org/.

New spring course: Principles of Human-Robot Interaction


CSEE professor Cynthia Matuszek will teach a new special topics course this spring on Principles of Human-Robot Interaction. The graduate-level course (CMSC 691-08) will meet on Tuesdays and Thursdays from 4:00 to 5:15pm in 013 Sherman Hall.


 

Principles of Human-Robot Interaction

An introduction to robots in our daily lives

CMSC 691-08, 4:00-5:15pm Tue/Thu, starting 26 January 2016, UMBC

Robots are becoming ubiquitous. From Roombas in our homes, to surgical robots in hospitals, to giant manipulators that assemble cars, robots are everywhere. In the past, robots have only ever interacted with highly trained experts. Now, as they are being deployed more widely, we must address new questions about how our robots can interact day-to-day with end users — non-experts — safely, usefully, and pleasantly. This new area of research is called Human-Robot Interaction, or HRI.

This 3-credit special topics course aims to introduce students to current research in HRI and provide hands-on experience with HRI research. Students will explore the diverse range of research topics in this area, learn to identify HRI problems in their own research, and carry out a collaborative project involving human-robot interactions. Topics to be covered include:

  • Social robots: how can robots be social beings? When do we want them to?
  • Human-robot collaboration: humans and robots working together on tasks
  • Natural-language interactions with robots and human-robot dialog
  • Telerobotics: the uses of remote presence and teleoperation
  • Expressive robots: how can robots express emotion – and should they?

Students may benefit from having some previous coursework or experience in AI, machine learning, or robotics, but none are necessary. Undergraduate students can enroll with the instructor’s permission. For more information, contact Dr. Matuszek at cmat at umbc.edu.
