Free workshop on using the Arduino microcontroller, Sat. 11/14 and 11/21


The UMBC IEEE Branch will hold an Arduino workshop on Saturday, November 14th and Saturday, November 21st from 2:00-6:00pm in SHER 003 (Lecture Hall 4). It’s a great opportunity to learn microcontroller and circuit basics and how to use the Arduino to build cyber-physical systems for home automation, robotics, games, and more.

The Arduino microcontroller is a great device for anyone who wants to learn more about technology. It is used in a variety of fields in research and academia and may even help you get an internship. Our instructors have used the Arduino for researching self-replicating robots and remote-controlled helicopters, hacking into a vehicle’s control system, and using radars to detect human activity in a room. Some of the hackathon projects by our IEEE members include developing a drink mixer that wirelessly connects with a Tesla Model S and a full-body haptic feedback suit for the Oculus Rift. The Arduino is a wonderful tool and is fairly easy to use. Everyone should learn how to use it!

UMBC’s Institute of Electrical and Electronics Engineers (IEEE) student branch is hosting two Level 1 workshops this semester: this Saturday (Nov. 14th) and next Saturday (Nov. 21st). Each workshop will be held in SHER 003 (Lecture Hall 4) from 2pm to 6pm. Please register online to sign up for either workshop. Contact Sekar Kulandaivel () if you have any questions.

The workshop is open to all majors (minimal coding experience recommended). You only need to bring your laptop and charger, and to download and install the Arduino IDE beforehand. We hope to see many of you this weekend! You REALLY don’t want to miss out on this opportunity.
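If you’d like a head start before Saturday, the traditional first Arduino program is a sketch that blinks the board’s built-in LED. This is a generic warm-up example, not the workshop’s actual material, but it shows the setup()/loop() structure every Arduino sketch uses:

    // Blink the on-board LED once per second.
    // LED_BUILTIN maps to pin 13 on most Uno-class boards.
    void setup() {
      pinMode(LED_BUILTIN, OUTPUT);     // configure the LED pin as an output
    }

    void loop() {
      digitalWrite(LED_BUILTIN, HIGH);  // LED on
      delay(1000);                      // wait 1000 ms
      digitalWrite(LED_BUILTIN, LOW);   // LED off
      delay(1000);
    }

Paste it into the Arduino IDE, select your board and serial port, and click Upload; the LED should start blinking once per second.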

talk: Grounded Language Acquisition: A Physical Agent Approach, Fri 10/9

The UMBC CSEE Seminar Series Presents

Grounded Language Acquisition: A Physical Agent Approach

Dr. Cynthia Matuszek

Interactive Robotics and Language Lab
Computer Science and Electrical Engineering, UMBC

12:00-1:00pm Friday, 9 Oct. 2015, ITE 325b

A critical component of understanding human language is the ability to map words and ideas in that language to aspects of the external world. This mapping, called the symbol grounding problem, has been studied since the early days of artificial intelligence; however, advances in language processing, sensory, and motor systems have only recently made it possible to directly interact with tangibly grounded concepts. In this talk, I describe how we combine robotics and natural language processing to acquire and use physically grounded language: specifically, how robots can learn to follow instructions, understand descriptions of objects, and build models of language and the physical world from interactions with users. I will describe our work on building a learning system that can ground English commands and descriptions from examples, making it possible for robots to learn from untrained end-users in an intuitive, natural way, and describe applications of our work in following directions and learning about objects. Finally, I will discuss how robots with these learning capabilities address a number of near-term challenges.

Cynthia Matuszek is an Assistant Professor in the University of Maryland, Baltimore County’s Computer Science and Electrical Engineering department. She completed her Ph.D. at the University of Washington in 2014, where she was a member of both the Robotics and State Estimation lab and the Language, Interaction, and Learning group. She has published in the areas of artificial intelligence, robotics, ubiquitous computing, and human-robot interaction. Her research interests include human-robot interaction, natural language processing, and machine learning.

Hosts: Professors Fow-Sen Choa () and Alan T. Sherman ()

· directions and more information ·

talk: Keith Clark, Programming Robotic Agents, 2pm Fri 10/2, ITE325

Baxter is an industrial robot built by Rethink Robotics, a start-up company founded by Rodney Brooks. It was introduced in September 2012. Baxter is a 3-foot tall (without pedestal; 5'10" - 6'3" with pedestal), two-armed robot with an animated face.

Programming Robotic Agents: A Multi-tasking Teleo-Reactive Approach

Keith Clark, Imperial College London
University of Queensland, University of New South Wales
joint work with Peter Robinson, University of Queensland

2:00pm Friday, 2 October 2015, ITE325b

We present a multi-threaded, multi-tasking, message-communicating robotic agent architecture in which the concurrently executing tasks are programmed in TeleoR, a major extension of Nilsson’s Teleo-Reactive Procedures (TR), a guard ~> action rule language for robotic agents.

The rule guards query rapidly changing percept facts, and more slowly changing told and remembered facts, using fixed facts and relation and function rules (the agent’s knowledge) in the agent’s deductive BeliefStore. This operational semantics makes the language well suited to robot/robot or human/robot co-operative tasks.
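For readers new to the TR style, the core evaluation rule is easy to state: a procedure’s rules are scanned in order, and the action of the first rule whose guard currently holds is the one executed, with re-evaluation happening continuously as the percept facts change. Here is a minimal sketch of that first-matching-guard loop in C++; the rules and percepts are invented placeholders to show the control structure, not actual TeleoR syntax:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // One guard ~> action rule: if guard() holds, action names what to do.
    struct Rule {
        std::function<bool()> guard;
        std::string action;
    };

    // TR evaluation: scan the rules in order and fire the first whose
    // guard holds. A real agent repeats this as its BeliefStore changes.
    std::string evaluate(const std::vector<Rule>& rules) {
        for (const auto& r : rules) {
            if (r.guard()) return r.action;
        }
        return "idle";  // no guard holds
    }

    int main() {
        bool holding = false, at_goal = false;  // toy percept facts
        std::vector<Rule> fetch = {
            {[&] { return holding && at_goal; }, "release"},
            {[&] { return holding; },            "move_to_goal"},
            {[&] { return true; },               "grab"},  // default rule
        };
        std::cout << evaluate(fetch) << "\n";  // prints "grab"
    }

In an actual TeleoR agent this evaluation is redone whenever the BeliefStore changes, so the selected action always tracks the current situation.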

TeleoR extends TR in:

  • being typed and higher order,
  • having a typed higher order LP/FP language, QuLog, for encoding BeliefStore knowledge,
  • having extra forms of rules and actions, and
  • having task atomic procedures to control the deadlock- and starvation-free sharing of several robotic resources by concurrently executing tasks.

Its use is illustrated in the video at http://bit.ly/teleor. It is being used at UNSW to write the control program for a two-armed Baxter robot working in co-operation with a person concurrently engaged in several assembly tasks.

Keith Clark is Emeritus Professor of Computer Science at Imperial College London, England and a Visiting Professor at the University of Queensland and the University of New South Wales. He has lectured in both mathematics and computer science.

Host: Tim Finin

Upcoming talks and directions

Opportunities through robotics: Kavita Krishnaswamy ’07

An interview with UMBC Computer Science Ph.D. student Kavita Krishnaswamy appeared in a recent post on the UMBC Alumni Blog.

Every so often, we’ll chat with an alum about what they do and how they got there. Today we’re talking with Kavita Krishnaswamy ’07, mathematics and computer science. Krishnaswamy has spinal muscular atrophy and has not been able to leave her house in six years. Thanks to the Beam telepresence robot, which allows her to remotely view and navigate spaces through her computer screen, she’s presented her doctoral thesis and attended conferences across the country. The current Ph.D. student talks about her experience with the Beam and her research on robotics and accessibility.

Read the full interview on the UMBC alumni blog.

Robotic assistive devices for independent living


CSEE PhD student Kavita Krishnaswamy and Prof. Tim Oates write about their research on using brain-computer interfaces and speech recognition tools to control robotic devices that assist individuals with reduced muscular strength. The piece, Robotic assistive devices for independent living, appeared in Robohub, "a non-profit online communication platform that brings together experts in robotics research, start-ups, business, and education from across the globe."

They describe their motivation as follows.

"One of the most craved aspects of the human experience is to be independent: the abilitiy to take care of one's self establishes a sense of dignity, inherent freedom, and profound independence. Our goal is to bring robotic assistive devices into the real world where they can support individuals with severe disabilities and alleviate the workload of caregivers, with the ultimate vision of helping people with severe physical disabilities to achieve physical independence without relying on others. As robotic assistive devices become ubiquitous, they will enable people with severe physical disabilities to confidently use technology in their daily lives, not just to survive, but to flourish."

They demonstrated the feasibility of integrating a brain-computer interface with speech recognition for self-directed arm repositioning tasks, using an Emotiv Epoc headset and Dragon NaturallySpeaking voice recognition software to reposition the simulated arm of an avatar through a robotic interface.
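As a purely illustrative sketch of that integration pattern (the function names and threshold below are invented for this post, not details of their system), a controller might accept an arm repositioning command only when the speech channel and a sufficiently confident BCI signal agree:

    #include <iostream>
    #include <string>

    // Hypothetical fusion of two input channels: the speech recognizer
    // supplies the intended command and the BCI headset supplies a
    // confidence score for the user's imagined movement.
    bool shouldMoveArm(const std::string& speechCommand,
                       double bciConfidence,
                       double threshold = 0.7) {  // illustrative threshold
        return speechCommand == "move arm" && bciConfidence >= threshold;
    }

    int main() {
        // Simulated readings standing in for the two device APIs.
        std::string heard = "move arm";  // e.g., from the speech recognizer
        double confidence = 0.82;        // e.g., from the BCI headset

        if (shouldMoveArm(heard, confidence)) {
            std::cout << "Repositioning the simulated arm\n";
        } else {
            std::cout << "Ignoring low-confidence or unrecognized input\n";
        }
    }

Requiring the two channels to agree is one simple way to reduce false activations from either noisy EEG signals or misrecognized speech.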
