MS defense: S. Viseh, Low Power On-board Processor for A Tongue Assistive Device, 12pm Tue 8/5

MS Thesis Defense

A Low Power On-board Processor
for A Tongue Assistive Device

Sina Viseh

12:00 pm Tuesday, 5 August 2014, ITE 325B

In biomedical wearable devices, the patient’s convenience and accuracy are the main priorities. To meet the convenience requirement, power consumption, which directly translates to battery lifetime and size, must be kept as low as possible, while any improvements must not compromise accuracy. Reducing the energy consumption of these devices has therefore been the subject of significant research in recent years. In most wearable devices, all raw data is transmitted to a computer that carries out the required processing. This volume of communication leads to considerable power consumption and the need for a bulky battery, which hinders the device’s practicality and the patient’s convenience.

The Tongue Drive System (TDS) is a new unobtrusive, wireless, wearable assistive device that allows real-time tracking of voluntary tongue motion in the oral space for communication, control, and navigation applications. The intraoral TDS (iTDS) clasps to the upper teeth and resists sensor misplacement. However, the iTDS places tighter restrictions on the device’s dimensions, limiting the battery size and consequently requiring a considerable reduction in power consumption if it is to operate for an extended period of two days on a single charge.

In this thesis, we propose an ultra-low-power local processor for the TDS that performs all signal processing on the transmitter side, immediately following the sensors. Implementing this computational engine reduces the data volume that must be wirelessly transmitted to a PC or smartphone by a factor of 30x, from 12 kbps to ~400 bps. The proposed design is implemented on an ultra-low-power IGLOO nano FPGA and tested on an AGLN250 prototype board. According to our post-place-and-route results, implementing the engine on the FPGA significantly reduces the required data transmission, while an ASIC implementation in 65 nm CMOS consumes 0.128 mW and occupies a 0.02 footprint. To explore a different architecture, we also mapped the proposed TDS processor onto the EEHPC many-core, which offers a flexible and time-saving design procedure. With a local processor, the power consumption and size of the iTDS can be significantly reduced through the use of a much smaller rechargeable battery, and the system can operate longer on each charge, improving the iTDS’s usability.
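
As a rough illustration of the data-rate claim above, the arithmetic below reproduces the 30x reduction; the rates come from the abstract, while the radio energy-per-bit figure is a hypothetical placeholder rather than a measurement from the thesis.

    # Illustrative arithmetic only: data rates are from the abstract above;
    # the energy-per-bit figure is a hypothetical placeholder, not a thesis result.
    RAW_RATE_BPS = 12_000      # raw sensor data rate (12 kbps)
    PROCESSED_RATE_BPS = 400   # rate after on-board processing (~400 bps)

    reduction = RAW_RATE_BPS / PROCESSED_RATE_BPS
    print(f"data-volume reduction: {reduction:.0f}x")  # -> 30x

    # With a hypothetical transmit cost of 50 nJ/bit, the daily radio energy:
    ENERGY_PER_BIT_J = 50e-9
    SECONDS_PER_DAY = 24 * 3600
    for label, rate in [("raw", RAW_RATE_BPS), ("processed", PROCESSED_RATE_BPS)]:
        joules = rate * SECONDS_PER_DAY * ENERGY_PER_BIT_J
        print(f"{label:9s}: {joules:.2f} J/day spent on transmission")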

Committee: Dr. Tinoosh Mohsenin (chair), Tim Oates and Mohamed Younis

MS defense: C. Shah, Usability Study of the Pico Authentication Device, 2pm Mon 8/4

MS Thesis Defense

A Usability Study of the Pico Authentication Device:
User Reactions to Pico Emulated on an Android Phone

Chirag Shah

2:00pm Monday, 4 August 2014, ITE 346

We emulate the Pico authentication token on an Android smartphone and evaluate its usability through a survey of users. In 2011, Stajano proposed Pico as a physical token-based authentication system to replace traditional passwords. As far as we know, Pico has never before been implemented or tested with users. We evaluate the usability of our emulation of Pico through a comparative study in which each user creates and authenticates herself to three online accounts twice: once using Pico, and once using passwords. The study measures the accuracy, efficiency, and satisfaction of users in these tasks. Pico offers many advantages over passwords, including memory-effortless and physically effortless authentication, no typing, and high security. Based on public-key cryptography, Pico’s security design ensures that no credential ever leaves the Pico token unencrypted.
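
As a loose illustration of that last point, the sketch below shows a generic public-key challenge-response exchange using Ed25519 signatures from the Python cryptography package. It is not Stajano’s Pico protocol or the emulator built for this thesis; it only demonstrates the general idea that the secret key stays on the token and only a signature ever leaves it.

    # Generic challenge-response sketch (NOT the actual Pico protocol): the
    # private key never leaves the "token"; only a signature is transmitted.
    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Token side: the secret key is generated and kept on the device.
    token_key = ed25519.Ed25519PrivateKey.generate()
    enrolled_public_key = token_key.public_key()  # shared with the server at enrollment

    # Server side: issue a fresh random challenge for each login attempt.
    challenge = os.urandom(32)

    # Token side: prove possession of the key by signing the challenge.
    signature = token_key.sign(challenge)

    # Server side: verify the signature against the enrolled public key.
    try:
        enrolled_public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")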

In summer 2014 we conducted a study with 23 subjects from the UMBC community. Each subject carried out scripted authentication tasks, separately using our Pico emulator and a traditional password system. We measured the time and accuracy with which subjects carried out these tasks and asked each subject to complete a survey. The survey instrument included ten Likert-scale questions, free-response questions, and a demographics questionnaire. Analyzing these data, we found that subjects reacted positively to the Pico emulator in their responses to the Likert questions. Statistical analysis of the reactions and measurements gathered in this study shows that subjects found the system accurate and efficient, and were satisfied with it.

Committee: Dr. Alan Sherman (chair), Kostas Kalpakis, Charles Nicholas and Dhananjay Phatak

PhD proposal: C. Grasso, Information Extraction from Clinical Notes, 11am Mon 8/4

PhD Dissertation Proposal

“S:PT.-HAS NO PMD.”
Information Extraction from Clinical Notes

Clare Grasso

11:00am Monday, 4 August 2014, ITE 325b

Clinical decision support (CDS) systems aid clinical decision making by matching an individual patient’s data against a computerized knowledge base in order to present clinicians with patient-specific recommendations. The need for methods that extract the clinical information in the free-text portions of the clinical record into a form that CDS systems can access and use has been identified as one of the top five grand challenges in clinical decision support. This research investigates scalable machine learning and semantic techniques that extract medical concepts from the text without relying on an underlying grammar, so that they can be applied in CDS on commodity hardware and software systems. Additionally, by packaging the extracted data in a semantic representation, these facts can be combined with other semantically encoded facts and reasoned over. This allows clinically relevant facts that are not directly mentioned in the text to be inferred and presented to the clinician for decision making.
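
A minimal sketch of that last idea, using rdflib with an invented namespace, predicates and rule (not the ontology or reasoner proposed in this research):

    # Encode extracted clinical facts as RDF triples and infer an unstated
    # fact with a hand-written rule. All vocabulary here is hypothetical.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/clinical#")
    g = Graph()

    # Facts extracted from a note such as "Pt. started on metformin."
    patient = EX.patient123
    g.add((patient, RDF.type, EX.Patient))
    g.add((patient, EX.prescribed, EX.Metformin))

    # Background knowledge: metformin is indicated for type 2 diabetes.
    g.add((EX.Metformin, EX.indicatedFor, EX.Type2Diabetes))

    # Rule: prescribed(X, D) & indicatedFor(D, C) -> likelyHasCondition(X, C)
    inferred = []
    for pt, _, drug in g.triples((None, EX.prescribed, None)):
        for _, _, condition in g.triples((drug, EX.indicatedFor, None)):
            inferred.append((pt, EX.likelyHasCondition, condition))
    for triple in inferred:
        g.add(triple)

    # The inferred fact is now available even though the note never states it.
    print((patient, EX.likelyHasCondition, EX.Type2Diabetes) in g)  # True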

Committee: Drs. Anupam Joshi (chair), Tim Finin, Aryya Gangopadhyay, Charles Nicholas, Claudia Pearce and Eliot Siegel

MS defense: P. Pappachan, Remedy: A Semantic and Collaborative Approach to Community Health-Care, 10am Thr 7/31

MS Thesis Defense

Remedy: A Semantic and Collaborative
Approach to Community Health-Care

Primal Pappachan

10:00am Thursday, 31 July 2014, ITE 325b

Community Health Workers (CHWs) act as liaisons between health-care providers and patients in underserved or un-served areas. However, the lack of information sharing and training support impedes the effectiveness of CHWs and their ability to correctly diagnose patients. In this thesis, we propose and describe Remedy, a system for mobile and wearable computing devices that assists CHWs in decision making and facilitates collaboration among them. Remedy can infer possible diseases and treatments by representing the diseases, their symptoms, and the patient’s context in OWL ontologies and reasoning over this model. The semantic representation of data makes it easier to share knowledge, such as disease, symptom, diagnosis-guideline, and demographic information, among the various people involved in health care (e.g., CHWs, patients, health-care providers). We describe the Remedy system with the help of a motivating community health-care scenario and present an Android prototype for smartphones and Google Glass.
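
For a flavor of how such a model might be consulted, the sketch below ranks candidate diseases for a set of observed symptoms with a SPARQL query via rdflib; the vocabulary is invented for this sketch and is not the Remedy ontology.

    # Toy symptom/disease graph; disease candidates are ranked by how many of
    # the observed symptoms they explain. Purely illustrative vocabulary.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF

    H = Namespace("http://example.org/health#")
    g = Graph()
    for disease, symptoms in {
        H.Malaria: [H.Fever, H.Chills, H.Headache],
        H.Dengue:  [H.Fever, H.Rash, H.JointPain],
    }.items():
        g.add((disease, RDF.type, H.Disease))
        for s in symptoms:
            g.add((disease, H.hasSymptom, s))

    observed = [H.Fever, H.Chills]  # symptoms recorded by the CHW

    query = """
    SELECT ?disease (COUNT(?s) AS ?matches) WHERE {
        ?disease a h:Disease ;
                 h:hasSymptom ?s .
        VALUES ?s { %s }
    } GROUP BY ?disease ORDER BY DESC(?matches)
    """ % " ".join(f"<{s}>" for s in observed)

    for row in g.query(query, initNs={"h": H}):
        print(row.disease, row.matches)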

Committee: Drs. Anupam Joshi (chair), Tim Finin, Michael Grasso, Aryya Gangopadhyay

MS defense: M. Madeira, Analyzing Opinions in the Mom Community on YouTube, 2pm Wed 7/30

MS Thesis Defense

Analyzing Opinions in the Mom Community on YouTube

Morgan Madeira

2:00pm Wednesday, 30 July 2014, ITE 325b

The “Mom Community” on YouTube consists of a large group of parents who share their parenting beliefs and experiences in order to connect with and inform others. Although there is a lot of positive support in this community, it is often a hotspot for debate over controversial parenting topics. Many of these topics have one side that represents the beliefs of “crunchy” moms; crunchy is a term used to describe parents who intentionally choose natural parenting methods and eco-friendly products to raise their children. Debate over these practices has led to “mompetition” and the idea that there is a right way to parent. This research investigates these dynamics, including how different crunchy topics are discussed and how the community has changed over time. Video comments and user data are collected from YouTube and used to understand parenting practices and opinions in the mom community.
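
For readers curious how such comments can be gathered, the sketch below uses the current YouTube Data API v3 via google-api-python-client; the collection pipeline actually used in the thesis may have differed, and the API key and video ID are placeholders.

    # Fetch top-level comments for one video with the YouTube Data API v3.
    # API_KEY and VIDEO_ID are placeholders, not values from the thesis.
    from googleapiclient.discovery import build

    API_KEY = "YOUR_API_KEY"
    VIDEO_ID = "SOME_VIDEO_ID"

    youtube = build("youtube", "v3", developerKey=API_KEY)
    request = youtube.commentThreads().list(
        part="snippet", videoId=VIDEO_ID, maxResults=100, textFormat="plainText"
    )
    while request is not None:
        response = request.execute()
        for item in response["items"]:
            top = item["snippet"]["topLevelComment"]["snippet"]
            print(top["authorDisplayName"], "|", top["textDisplay"][:80])
        # list_next() follows the pagination token until no pages remain.
        request = youtube.commentThreads().list_next(request, response)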

Committee: Drs. Anupam Joshi (chair), Tim Finin, Karuna Joshi

MS defense: A. Hendre, Cloud Security Control Recommendation System, 8:30am Thr 7/31

MS Thesis Defense

Comparison of Cloud Security Standards and a
Cloud Security Control Recommendation System

Amit S. Hendre

8:30am Thursday, 31 July 2014, ITE 346

Cloud services are becoming an essential part of many organizations. Cloud providers have to adhere to security and privacy policies to ensure that their users’ data remains confidential and secure. On one hand, cloud providers are implementing their own security and privacy controls; on the other, standards bodies such as the Cloud Security Alliance (CSA), the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) are developing broad standards for cloud security. In this thesis we provide a comprehensive analysis of the cloud security standards being developed and how they compare with the security controls of cloud providers. Our study focuses mainly on policies covering mobility of resources, identity and access management, data protection, incident response, and audit and assessment. This thesis will help consumer organizations with their compliance needs by evaluating the security controls and policies of cloud providers and assisting them in identifying their enterprise cloud security policies.

Committee: Drs. Karuna Joshi, Tim Finin and Yelena Yesha

PhD proposal: Lisa Mathews, Creating a Collaborative Situational-Aware IDPS, 11am Tue 6/10

Ph.D. Dissertation Proposal

Creating a Collaborative Situational-Aware IDPS

Lisa Mathews

11:00am Tuesday, 10 June 2014, ITE 346

Traditional intrusion detection and prevention systems (IDPSs) have well-known limitations that decrease their utility against many kinds of attacks. Current state-of-the-art IDPSs are point-based solutions that perform a simple analysis of host or network data and then flag an alert. Most of these systems can discover only known attacks whose signatures have been identified and stored in some form. They cannot detect “zero day” attacks or attacks that use “low-and-slow” vectors. Often an attack is revealed only by post-facto forensics, after some damage has already been done.

To address these issues, we are developing a semantic approach to intrusion detection that uses traditional as well as non-traditional sensors collaboratively. Traditional sensors include hardware or software such as network scanners, host scanners, and IDPSs like Snort. Potential non-traditional sensors include open sources of information such as online forums, blogs, and vulnerability databases, which contain textual descriptions of proposed attacks or discovered exploits. After the data streams from these sensors are analyzed, the extracted information is added as facts to a knowledge base using an ontology, based on W3C standards, that our group has developed. We have also developed rules/policies that can reason over the facts to identify the situation or context in which an attack can occur. By having different sources collaborate to discover potential security threats and to create additional rules/policies, the resulting situational-aware IDPS is more robust and better equipped to detect and stop complicated or creative attacks, such as those that follow a low-and-slow intrusion pattern, and to do so in soft real time. We will create a prototype of this system and test the efficiency and accuracy of its ability to detect complex malware.
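
As a toy illustration of the kind of cross-sensor rule such a system could apply (this is not the proposed IDPS, just a hand-written heuristic over invented facts), see the sketch below.

    # Flag a "low-and-slow" pattern: several low-severity observations about
    # the same source, from different sensors, spread over a long window.
    from collections import defaultdict
    from datetime import datetime, timedelta

    # Facts as (timestamp, sensor, source, event); in the real system these
    # would be assertions in the ontology-backed knowledge base.
    facts = [
        (datetime(2014, 6, 1, 2, 15), "snort",      "10.0.0.9", "port scan (low severity)"),
        (datetime(2014, 6, 3, 4, 40), "host-scan",  "10.0.0.9", "new ssh key added"),
        (datetime(2014, 6, 8, 1, 5),  "forum-feed", "10.0.0.9", "address named in exploit post"),
    ]

    WINDOW = timedelta(days=14)
    MIN_DISTINCT_SENSORS = 3

    by_source = defaultdict(list)
    for ts, sensor, src, event in facts:
        by_source[src].append((ts, sensor, event))

    for src, events in by_source.items():
        events.sort()
        span = events[-1][0] - events[0][0]
        sensors = {sensor for _, sensor, _ in events}
        if span <= WINDOW and len(sensors) >= MIN_DISTINCT_SENSORS:
            print(f"possible low-and-slow activity from {src}:",
                  [event for *_, event in events])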

Committee: Drs. Anupam Joshi (Chair), Tim Finin, John Pinkston, Charles Nicholas, Claudia Pearce, Yul Williams

PhD defense: Oleg Aulov, Human Sensor Networks for Disasters, 11am Thr 5/29

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Human Sensor Networks for Disasters

Oleg Aulov

11:00am Thursday, 29 May 2014, ITE325b, UMBC

This dissertation presents a novel approach that uses quantifiable social media data as a human-aware, near-real-time observing system coupled with geophysical predictive models for improved response to disasters and extreme events. It shows that social media data has the potential to significantly improve disaster management beyond informing the public, and it emphasizes the importance of the different roles that social media can play in the management, monitoring, modeling and mitigation of natural and human-caused disasters.

In the proposed approach, social media sources are viewed as a Human Sensor Network: social media users are viewed as “human sensors” that are “deployed” in the field, and their posts are considered “sensor observations”. I have used these human sensor observations, i.e., data acquired from social media, as boundary-value forcings to show improved geophysical model forecasts of extreme disaster events when they are combined with other scientific data such as satellite observations and sensor measurements. In addition, I have developed a system called ASON maps that dynamically combines model forecast outputs with specified social media observations and physical measurements to define the regions of event impacts, such as flood distributions and levels, beached tarballs, and power outages. Large real-time datasets for recent extreme disaster events were collected and archived, and are available as use-case scenarios.

In the case of the Deepwater Horizon oil spill disaster of 2010, which devastated the Gulf of Mexico, the research demonstrates how social media data can be used as a boundary forcing condition of the oil spill plume forecast model, resulting in an order-of-magnitude forecast improvement. In the case of Hurricane Sandy’s NY/NJ landfall in 2012, owing to inherent uncertainties in the weather forecasts, the NOAA operational surge model only forecasts the worst-case scenario for flooding from any given hurricane. This dissertation demonstrates how the model forecasts, when combined with social media data in a single framework, can be used for near-real-time forecast validation, damage assessment and disaster management. Geolocated and time-stamped photos allow near-real-time assessment of the surge levels at different locations, which can validate model forecasts, give timely views of the actual surge levels, and provide regional street-level maps of the upper bound beyond which the surge did not spread. In the case of the Tohoku Earthquake and Tsunami of 2011, the dissertation also discusses social media aspects of handheld devices, such as Geiger counters, that can potentially detect radioactive debris.

Committee: Dr. Milton Halem (chair), Tim Finin, Anupam Joshi, James Smith, Yelena Yesha

Ph.D. student Omar Shehab receives travel grants

UMBC graduate student Omar Shehab received a travel grant to attend two co-located events, the 14th Canadian Quantum Information Summer School and the 11th Canadian Quantum Information Student Conference. Both events are organized by the Fields Institute and will be held at the University of Guelph.

Omar is a fourth-year PhD student in Computer Science working with Professor Samuel Lomonaco. His Ph.D. research involves determining the quantum computational complexity of topological problems. He is also interested in quantum games, randomness and cryptography. This summer he will be working as a Visiting Research Assistant at the USC Information Sciences Institute facility in Arlington, Virginia.

PhD defense: Lushan Han, Schema Free Querying of Semantic Data, 10am Fri 5/23

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Schema Free Querying of Semantic Data

Lushan Han

10:00am Friday, 23 May 2014, ITE 325b

Developing interfaces to enable casual, non-expert users to query complex structured data has been the subject of much research over the past forty years. We refer to these as schema-free query interfaces, since they allow users to freely query data without understanding its schema, knowing how to refer to objects, or mastering the appropriate formal query language. Schema-free query interfaces address fundamental problems in natural language processing, databases and AI in connecting users’ conceptual models to machine representations.

However, schema-free query interfaces face three hard problems. First, we still lack a practical interface. Natural Language Interfaces (NLIs) are easy for users but hard for machines; current NLP techniques are still unreliable in extracting the relational structure of natural language questions. Keyword query interfaces, on the other hand, have limited expressiveness and inherit ambiguity from the natural language terms used as keywords. Second, people express or model the same meaning in many different ways, which can result in vocabulary and structure mismatches between users’ queries and the machine’s representation. We still rely on ad hoc and labor-intensive approaches to deal with this ‘semantic heterogeneity’ problem. Third, the Web has seen increasing amounts of open-domain semantic data with heterogeneous or unknown schemas, which challenges traditional NLI systems that require a well-defined schema. Some modern systems give up on translating the user query into a formal query at the schema level and instead search the entity network (ABox) directly for matches to the user query. This approach, however, is computationally expensive and ad hoc in nature.

In this thesis, we develop a novel approach to address these three hard problems. We introduce a new schema-free query interface, the SFQ interface, in which users explicitly specify the relational structure of the query as a graphical “skeleton” and annotate it with freely chosen words, phrases and entity names. This circumvents the unreliable step of extracting complete relations from natural language queries.

We describe a framework for interpreting these SFQ queries over open-domain semantic data that automatically translates them into formal queries. First, we learn a schema statistically from the entity network and represent it as a graph, which we call the schema network. Our mapping algorithms run on the schema network rather than the entity network, enhancing scalability. We define the probability of “observing” a path on the schema network and, building on it, create two statistical association models that are used to carry out disambiguation. Novel mapping algorithms are developed that exploit semantic similarity measures and association measures to address the structure and vocabulary mismatch problems. Our approach is fully computational and requires no special lexicons, mapping rules, domain-specific syntactic or semantic grammars, thesauri or hard-coded semantics.
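
The abstract does not name the two association models, so purely as an illustration of a statistical association measure computed over path-observation counts, here is pointwise mutual information (PMI) on toy data.

    # PMI over invented co-occurrence counts of schema-network terms on
    # sampled paths; the numbers and the choice of PMI are illustrative only.
    import math

    pair_counts = {("author", "paper"): 90, ("author", "city"): 5}
    term_counts = {"author": 100, "paper": 100, "city": 30}
    total = 200  # total path observations in the sample

    def pmi(a, b):
        p_ab = pair_counts.get((a, b), pair_counts.get((b, a), 0)) / total
        p_a, p_b = term_counts[a] / total, term_counts[b] / total
        return math.log2(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

    print(f"PMI(author, paper) = {pmi('author', 'paper'):.2f}")  # strongly associated
    print(f"PMI(author, city)  = {pmi('author', 'city'):.2f}")   # weakly associated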

We evaluate our approach on two large datasets, DBLP+ and DBpedia. We developed DBLP+ by augmenting the DBLP dataset with additional data from CiteSeerX and ArnetMiner. We created 220 SFQ queries on the DBLP+ dataset. For DBpedia, we had three human subjects (who were unfamiliar with DBpedia) translate 33 natural language questions from the 2011 QALD workshop into SFQ queries. We carried out cross-validation on the 220 DBLP+ queries and cross-domain validation on the 99 DBpedia queries in which the parameters tuned for the DBLP+ queries are applied to the DBpedia queries. The evaluation results on the two datasets show that our system has very good efficacy and efficiency.

Committee: Drs. Li Ding (Memect), Tim Finin (chair), Anupam Joshi, Paul McNamee (JHU), Yelena Yesha
