talk: Visual Exploration of Big Urban Data, Noon Thu 3/12, ITE325b, UMBC

Visual Exploration of Big Urban Data

Dr. Huy Vo
Center for Urban Science and Progress, New York University

12:00-1:00pm Thursday, 12 March 2015, ITE 325b

About half of humanity lives in urban environments today and that number will grow to 80% by the middle of this century. Cities are thus the loci of resource consumption, of economic activity, and of innovation; they are the cause of our looming sustainability problems but also where those problems must be solved. Data, along with visualization and analytics can help significantly in finding these solutions.

In this talk, I will discuss the challenges of visual exploration of big urban data and showcase our approaches in a study of New York City taxi trips. Taxis are valuable sensors and can provide unprecedented insight into many different aspects of city life. But analyzing these data presents many challenges. The data are complex, containing geographical and temporal components in addition to multiple variables associated with each trip. Consequently, it is hard to specify exploratory queries and to perform comparative analyses. These difficulties are compounded by the sheer size of the data: almost a billion taxi trips were recorded over a five-year period. I will present TaxiVis, a tool that allows domain experts to visually query taxi trips at interactive speeds and to perform tasks that were unattainable before. I will also discuss our key contributions in this work: a visual querying model and a novel indexing scheme for spatio-temporal datasets.
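To make the querying model concrete, here is a minimal sketch of the kind of spatio-temporal selection such visual queries express. The field names and records are hypothetical, and this linear scan is only illustrative; the point of TaxiVis's indexing scheme is to answer such queries interactively without scanning every record.

```python
from datetime import datetime

# Toy records standing in for taxi trips (hypothetical schema).
trips = [
    {"pickup_lat": 40.75, "pickup_lon": -73.99, "pickup_time": datetime(2013, 5, 1, 8, 30)},
    {"pickup_lat": 40.64, "pickup_lon": -73.78, "pickup_time": datetime(2013, 5, 1, 17, 5)},
    {"pickup_lat": 40.76, "pickup_lon": -73.98, "pickup_time": datetime(2013, 5, 2, 9, 10)},
]

def query(trips, lat_range, lon_range, time_range):
    """Select trips whose pickup falls inside a space-time box."""
    lat_lo, lat_hi = lat_range
    lon_lo, lon_hi = lon_range
    t_lo, t_hi = time_range
    return [t for t in trips
            if lat_lo <= t["pickup_lat"] <= lat_hi
            and lon_lo <= t["pickup_lon"] <= lon_hi
            and t_lo <= t["pickup_time"] <= t_hi]

# Example: all Midtown-area pickups on the morning of May 1st.
morning_midtown = query(trips,
                        lat_range=(40.74, 40.77),
                        lon_range=(-74.00, -73.97),
                        time_range=(datetime(2013, 5, 1, 6), datetime(2013, 5, 1, 12)))
```

A comparative analysis in this model is simply two such queries (say, two neighborhoods or two months) whose results are visualized side by side.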

Dr. Huy Vo is a Research Scientist at the Center for Urban Science and Progress (CUSP), New York University. His research focuses on large-scale data analysis and visualization, big data systems, and scalable displays. He has also been a Research Assistant Professor of Computer Science and Engineering at NYU’s Polytechnic School of Engineering since 2011. He is one of the co-creators of VisTrails, an open-source scientific workflow and provenance management system, where he led the design of the VisTrails Provenance SDK. He received his B.S. in Computer Science (2005) and Ph.D. in Computing (2011) from the University of Utah and was a two-time recipient of the NVIDIA Fellowship award (2009-2010 and 2010-2011).

Host: Jian Chen

talk: Physics, Simulation, and Computer Animation, Noon Mon 3/9, ITE325b

Physics, Simulation, and Computer Animation

Professor Adam W. Bargteil
University of Utah

12:00-1:00pm Monday, 9 March 2015, ITE325b

Physics-based computer animation has revolutionized the world of special effects. I will talk about several success stories, including my Academy Award-winning work on fracture, particle skinning, and large-scale splashing liquids. I will also talk about moving beyond cinematic special effects to create tools for artistic authoring of interactive animations and to enable visually predictive simulations that promise to revolutionize industrial design.

Adam W. Bargteil is an assistant professor at the University of Utah. His primary research interests lie in the area of physics-based animation. He earned his Ph.D. in computer science from the University of California, Berkeley and spent two years as a post-doctoral fellow in the School of Computer Science at Carnegie Mellon University. From 2005 to 2007, he was a consultant at PDI/DreamWorks, developing fluid simulation tools that were used in “Shrek the Third” and “Bee Movie.”

CSEE Ph.D. student Kavita Krishnaswamy featured in CNN story

CSEE Ph.D. student Kavita Krishnaswamy was featured in a recent CNN story, Will robots help the bedridden see the world?. She has been getting a lot of visibility in the last few months via a collaboration with Suitable Technologies, a Palo Alto-based company that makes ‘telepresence robots’. They loaned the department one of their high-end Beam systems last fall to use in our robotics-related research, led by professors Tim Oates and Cynthia Matuszek, and have been inviting Kavita to use their systems in various ways.

For example, she presented her dissertation proposal in December via Beam, participated in a panel at the Consumer Electronics Show in January, has been visiting museums via Beam this month, helped lead a debate on the ethics of brain-computer interfaces this past Monday, and will take part in a SXSW panel in March.

Here is an excerpt from the CNN story:

“A PhD candidate at the University of Maryland, Baltimore County, Krishnaswamy has spinal muscular atrophy and requires assistance 24 hours a day. She was able to make the museum trips using a Beam telepresence robot — a remotely controlled 16-inch screen mounted five feet above motorized wheels.

“I really enjoy the autonomy. It allows me to focus in on the things I want to see,” said Krishnaswamy. “And it’s not controlled by somebody else. I really like being independent.”

Since her first experience using a Beam to attend a computing conference in Seattle, Krishnaswamy says her life has changed drastically. She has more confidence and her calendar is suddenly filled. In a single day this week, she will take part in a debate on her campus, drop in on the Mobile World Congress in Barcelona, and be in Washington D.C. for the Human Robot Interaction conference.

Krishnaswamy has never had the ability to stand. The Beam puts her at eye-level and gives her a new perspective on the world, she says.”

talk: Topic Modeling with Structured Priors for Text-Driven Science


Topic Modeling with Structured Priors for Text-Driven Science

Michael Paul, JHU

12:00pm – 1:00pm, Monday, 2 March 2015, ITE 325

Many scientific disciplines are being revolutionized by the explosion of public data on the web and social media, particularly in health and social sciences. For instance, by analyzing social media messages, we can instantly measure public opinion, understand population behaviors, and monitor events such as disease outbreaks and natural disasters. Taking advantage of these data sources requires tools that can make sense of massive amounts of unstructured and unlabeled text. Topic models, statistical models that describe low-dimensional representations of data, can uncover interesting latent structure in large text datasets and are popular tools for automatically identifying prominent themes in text. However, to be useful in scientific analyses, topic models must learn interpretable patterns that accurately correspond to real-world concepts of interest.

In this talk, I will introduce Sprite, a family of topic models that can encode additional structure such as hierarchies, factorizations, and correlations, and that can incorporate supervision and domain knowledge. Sprite extends standard topic models by formulating the Bayesian priors over parameters as functions of underlying components, which can be constrained in various ways to induce different structures. This creates a unifying representation that generalizes several existing topic models while providing a powerful framework for building new ones. I will describe a few specific instantiations of Sprite and show how these models can be used in various scientific applications, including extracting self-reported information about drugs from web forums, analyzing healthcare quality in online reviews, and summarizing public opinion in social media on issues such as gun control.
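The central idea, priors formulated as functions of underlying components, can be sketched with toy numbers. The log-linear form below (hyperparameters as the exponential of weighted component sums) is illustrative of this general shape; the component weights, coefficients, and vocabulary here are all hypothetical.

```python
import math

# Hypothetical setup: 2 underlying components over a 4-word vocabulary.
# Each component is a vector of real-valued weights.
components = [
    [1.2, 0.8, -0.5, -1.0],   # e.g. a "health" theme
    [-0.9, 0.3, 1.1, 0.6],    # e.g. an "opinion" theme
]

# Per-topic coefficients saying how strongly each topic draws on each
# component; constraining these (e.g. sparsity, tree structure) is what
# induces hierarchies, factorizations, or correlations.
topic_coefficients = [
    [1.0, 0.0],   # topic 0 built purely from component 0
    [0.5, 0.5],   # topic 1 blends both components
]

def topic_prior(coeffs, components):
    """Dirichlet hyperparameters as a log-linear function of components."""
    vocab_size = len(components[0])
    return [math.exp(sum(b * comp[v] for b, comp in zip(coeffs, components)))
            for v in range(vocab_size)]

priors = [topic_prior(b, components) for b in topic_coefficients]
```

Because the exponential keeps every hyperparameter positive, any choice of coefficients yields a valid Dirichlet prior, which is what lets the constraints live on the unconstrained coefficients rather than on the prior itself.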

Michael Paul is a PhD candidate in Computer Science at Johns Hopkins University. He earned an M.S.E. in CS from Johns Hopkins University in 2012 and a B.S. in CS from the University of Illinois at Urbana-Champaign in 2009. He has received PhD fellowships from Microsoft Research, the National Science Foundation, and the Johns Hopkins University Whiting School of Engineering. His research focuses on exploratory machine learning and natural language processing for the web and social media, with applications to computational epidemiology and public health informatics.

— more information and directions: http://bit.ly/UMBCtalks

Two technical talks by Amazon senior staff, 4-6:30pm Tue 3/3

Senior Amazon staff members will give two technical talks next week on Tuesday, March 3, in the UC Ballroom on topics of great practical interest and utility.

  • Lydia Fitzpatrick, Senior Technical Program Manager for Amazon Mobile Business, will give a talk on “Web Performance Optimization” from 4:00pm to 5:00pm.
  • Leo Zhadanovsky, Senior Solutions Architect for Amazon Web Services, will present an “Introduction to Amazon Web Services (AWS)” from 5:30pm to 6:30pm. The talk will introduce cloud computing and discuss the various networking, compute, database, storage, application, deployment, and management services that AWS offers. It will demonstrate how to launch a full three-tier LAMP stack in minutes, as well as how to set up a simple web server on AWS. The presentation will also discuss several use cases, demonstrating how customers such as enterprises, startups, and government agencies are using AWS to power their computing needs.

The talks will be preceded and followed by an open networking opportunity with Amazon Human Resources representatives. Amazon is interested in students for internships and full-time positions who are majoring in Information Systems, Business Technology Administration, Computer Engineering, Computer Science, and Cybersecurity.

Debate: Ethics of brain-computer interface technology

What ethical problems might advances in brain-computer interface technology create?

That’s the question that will be debated Monday evening as part of the UMBC BioCOM Ethical Debates (B-Ethical) series co-sponsored by the Biology Council of Majors and Philosophers Anonymous.

The event will take place from 7:30pm to 9:00pm on Monday, March 2 in room 104 of the ITE building (lecture hall 7) at UMBC.

One team will be led by Professor Richard Wilson, a member of UMBC’s Philosophy department with a focus on applied ethics. The other team is headed by Kavita Krishnaswamy, a Ph.D. student from UMBC’s Computer Science and Electrical Engineering department whose dissertation research is exploring how robotics can help increase autonomy in daily living for people with disabilities. Kavita, who has a severe physical disability herself, will participate via a Beam Smart Presence robot. Also on the team is CSEE Professor Tinoosh Mohsenin, whose research includes deep learning to interpret high-resolution multichannel electroencephalography data.

Some details from the B-Ethical post:

“The field of Brain Computer Interface (BCI) research has led to the engineering of a device system that allows you to convert your thoughts into action by using your brain’s neural activity: think of controlling a robotic arm via electrodes placed on the brain. This revolutionary field in neuroscience has given hope to those who are severely disabled, including but not limited to those who suffer from blindness, paralysis, and other debilitative physical disabilities. Hence, brain-computer interface technology has the potential and power to do incredible good.

Some note, however, the importance of recognizing the possibility for ethical wrongdoing. One such ethical question surrounding this field is the possibility of social stratification as a result of barriers such as cost. If brain enhancement does become effective and ubiquitous, pressure to enhance one’s brain in order to keep up with rising competition might leave some unable to access this enhancement because of financial barriers. Hence, this would not only widen the gap in society between the rich and the poor, but become dangerous, creating a social stratum in which an intellectual elite armed with thought-controlled weapons would govern the people. One could think of an army with capabilities such as night-vision eyes, fingers that can fire bullets, or humans made immortal by copying their genetic material into more resilient hardware; these endless possibilities ascend into the world of sci-fi, and they are scary.”

PhD proposal: User Identification in Wireless Networks

Ph.D. Dissertation Proposal

User Identification in Wireless Networks

Christopher Swartz

9:00-11:00pm Friday, 27 February 2015, ITE 325B

Wireless communication using the 802.11 specifications is almost ubiquitous in daily life through an increasing variety of platforms. Traditional identification and authentication mechanisms employed for wireless communication commonly mimic physically connected devices and do not account for the broadcast nature of the medium. Both stationary and mobile devices that users interact with are regularly authenticated using a passphrase, pre-shared key, or an authentication server. Current research requires unfettered access to the user’s platform or information that is not normally volunteered.

We propose a mechanism to verify and validate the identity of 802.11 device users by applying machine learning algorithms. Existing work substantiates the application of machine learning for device identification using Commercial Off-The-Shelf (COTS) hardware and algorithms. This research seeks to refine and investigate features relevant to identifying users. The approach is segmented into three main areas: a data ingest platform, processing, and classification.

Initial research showed that we can properly classify target devices with high precision, recall, and area under the ROC curve using a sufficiently large real-world data set and a limited set of features. The primary contribution of this work is exploring the development of user identification through data observation. A combination of identifying new features, creating an online system, and limiting user interaction is the objective. We will create a prototype system and test the effectiveness and accuracy of its ability to properly identify users.
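The classification step can be sketched minimally as follows. The features (per-user statistics over observable 802.11 traffic) and the nearest-centroid rule are hypothetical stand-ins; identifying the actual features and algorithms is precisely the subject of the proposed research.

```python
from statistics import mean

# Hypothetical per-user feature vectors derived from observable 802.11
# traffic: (mean frame inter-arrival time in ms, mean frame size in bytes).
training = {
    "alice": [(12.0, 310.0), (11.5, 295.0), (12.4, 320.0)],
    "bob":   [(45.0, 140.0), (47.2, 150.0), (44.1, 135.0)],
}

def centroid(samples):
    """Average each feature column to summarize a user's traffic profile."""
    return tuple(mean(col) for col in zip(*samples))

centroids = {user: centroid(s) for user, s in training.items()}

def identify(observation):
    """Attribute a new observation to the user with the nearest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda u: dist2(observation, centroids[u]))

guess = identify((12.1, 305.0))   # traffic resembling alice's profile
```

An online version of this idea would update the centroids incrementally as frames arrive, which matches the proposal's goal of classification with limited user interaction.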

Committee: Drs. Joshi (Chair/Advisor), Nicholas, Younis, Finin, Pearce, Banerjee

talk: Visual understanding of human actions, 12-1 Fri 2/27, ITE325b

Visual understanding of human actions

Dr. Hamed Pirsiavash

Postdoctoral Research Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

12:00-1:00pm Friday, 27 February, 2015, ITE 325B

The aim in computer vision is to develop algorithms for computers to “see” the world as humans do. Central to this goal is understanding human behavior as an intelligent agent functioning in the visual world. For instance, in order for a robot to interact with us, it should understand our actions to produce the proper response. My work explores several directions towards computationally representing and understanding human actions.

In this talk, I will focus on detecting actions and judging their quality. First, I will describe simple grammars for modeling long-range temporal structure in human actions. Real-world videos are typically composed of multiple action instances, where each instance is itself composed of sub-actions with variable durations and orderings. Our grammar models capture such hierarchical structure while admitting efficient, linear-time parsing algorithms for action detection. The second part of the talk will describe our algorithms for going beyond detecting actions to judging how well they are performed. Our learning-based framework provides feedback to the performer to improve the quality of his or her actions.

Host: Mohamed Younis

PhD proposal: Scalable Storage System for Big Scientific Data

Ph.D. Dissertation Proposal

MLVFS: A Scalable Storage System For Managing Big Scientific Data

Navid Golpayegani

3:00-5:00pm Tuesday 24 February 2015, ITE 346

Managing petabytes or exabytes of data with hundreds of millions to billions of files is a necessary first step towards an effective big data computing and collaboration environment for distributed systems. Current file system designs have focused on providing better and faster data distribution, yet managing the directory structure for data discovery has become an essential element of the scalability problem for big data systems. Recent designs address the challenge of the exponential growth in the number of files, but research into the organizational aspect of managing big data systems with hundreds of millions of files remains largely unexplored. Most file systems organize data into static directory structures, making data discovery hard and slow when dealing with large data sets.

This thesis will propose a unique Multiview Lightweight Virtual File System (MLVFS) design to deal primarily with the data organizational management problem in big data file systems. MLVFS is capable of dynamically generating directory structures to create multiple views of the same data set. With multiple views, the storage system can organize available data sets by differing criteria, such as location or date, without the need to replicate data or use symbolic links. In addition, MLVFS addresses scalability issues associated with the growth of the stored files by removing the internal metadata system and replacing it with generally available external metadata information (e.g., database servers, project compute servers, remote repositories). This thesis, moreover, proposes to add plug-in capabilities not normally found in file systems, making the system highly flexible in terms of specifying sources of metadata information, dynamic file format streaming, and other file handling features.
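The multiple-views idea, one set of files presented under several dynamically generated directory hierarchies driven by external metadata rather than replication or symbolic links, can be sketched in miniature. The file names, attributes, and in-memory catalog below are hypothetical; in MLVFS the metadata would come from an external source and the views would be materialized as virtual directories.

```python
# Toy catalog standing in for an external metadata source
# (file names and attributes are hypothetical).
catalog = [
    {"file": "granule_001.hdf", "location": "atlantic", "date": "2015-01-10"},
    {"file": "granule_002.hdf", "location": "pacific",  "date": "2015-01-10"},
    {"file": "granule_003.hdf", "location": "atlantic", "date": "2015-02-03"},
]

def view(catalog, criterion):
    """Generate one directory view: attribute value -> list of files."""
    tree = {}
    for entry in catalog:
        tree.setdefault(entry[criterion], []).append(entry["file"])
    return tree

# The same files appear under both views without being copied or linked.
by_location = view(catalog, "location")
by_date = view(catalog, "date")
```

Because each view is computed from metadata on demand, adding a new organizational criterion costs only a new grouping function, not a reorganization of the stored data.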

The performance of MLVFS will be tested in both simulated and real-world environments. MLVFS will be installed on the BlueWave cluster at UMBC for simulated load testing to measure its performance under various loads. Simultaneously, a stable version of MLVFS will run in real-world production environments such as the NASA MODIS instrument processing system (MODAPS). The MODAPS system will be used to show examples of real-world use cases for MLVFS. Additionally, other systems will be explored for real-world use of MLVFS, such as research into Biomedical Image Stitching at NIST.

Committee: Drs. Milton Halem (Chair, Advisor), Yelena Yesha, Charles Nicholas, John Dorband, Daniel Duffy

talk: Understanding Social Spammers, Noon Tue 2/24, ITE325

Understanding Social Spammers: A Data Mining Perspective
Xia “Ben” Hu

Computer Science and Engineering
Arizona State University

12:00-1:00pm Tuesday, 24 February 2015, ITE 325

With the growing popularity of social media, social spamming has become rampant on all platforms. Many (fake) accounts, known as social spammers, are employed to overwhelm legitimate users with unwanted information. Social spammers are unique due to their coordinated efforts to launch attacks such as distributing ads to generate sales, disseminating pornography and viruses, executing phishing attacks, or simply sabotaging a system’s reputation. In this talk, I will introduce a novel and systematic analysis of social spammers from a data mining perspective to tackle the challenges raised by social media data for spammer detection. Specifically, I will formally define the problem of social spammer detection and discuss the unique properties of social media data that make this problem challenging. By analyzing the two most important types of information, network and content information, I will introduce a unified framework by collectively using heterogeneous information in social media. To tackle the labeling bottleneck in social media, I will show how we can take advantage of the existing information about spam in email, SMS, and on the web for spammer detection in microblogging. I will also present a solution for efficient online processing to handle fast-evolving social spammers.

Xia Hu is a Ph.D. candidate in Computer Science and Engineering at Arizona State University, supervised by Professor Huan Liu. His research interests include data mining, machine learning, and social network analysis. As a result of his research work, he has published nearly 40 papers in several major academic venues, including WWW, SIGIR, KDD, WSDM, IJCAI, AAAI, CIKM, and SDM. One of his papers was selected for the Best Paper Shortlist at WSDM’13. He is the recipient of the IEEE “Atluri Award” Scholarship, the 2014 ASU President’s Award for Innovation, and a Faculty Emeriti Fellowship. He has served on program committees for several major conferences, such as WWW, IJCAI, SDM, and ICWSM, and has reviewed for multiple journals, including IEEE TKDE, ACM TOIS, and Neurocomputing. His research attracts a wide range of external government and industry sponsors, including NSF, ONR, AFOSR, Yahoo!, and Microsoft.

— more information and directions: http://bit.ly/UMBCtalks
