talk: Visualizing (Scientific) Simulations, 12pm 4/4, UMBC

Visualizing (Scientific) Simulations with
Geometric and Topological Features

Prof. Joshua A. Levine, Clemson University
12:00pm Monday, 4 April 2016, ITE 325b, UMBC

Today’s HPC resources are an essential component for enabling new scientific discoveries. Specifically, scientists in all fields leverage HPC to do computational simulations that complement laboratory experimentation. These simulations generate truly massive data; visualization offers a mechanism to help understand the simulated phenomena this data describes.

This talk will present two recent research projects, both of which highlight new techniques for visualization based on characterizing and computing features of interest. The first project describes an algorithm for surface extraction from particle data. This data is commonly used in simulations for phenomena at small (molecular dynamics), medium (fluid flow, fracture), and large (astrophysics) length scales. Surface geometry allows standard computer graphics approaches to be used to visualize complex behaviors. The second project introduces a new data structure for representing vector field data commonly found in computational fluid dynamics and climate modeling. This data structure enables robust extraction of topological features that provide summary visualizations of vector fields. Both projects exemplify my vision for how collaborative efforts between experts in scientific and computational fields are necessary to make the best use of our HPC systems.

Joshua A. Levine is an assistant professor in the Visual Computing division of the School of Computing at Clemson University. He received his PhD from The Ohio State University after completing his BS and MS in Computer Science from Case Western Reserve University. His research interests include visualization, geometric modeling, topological analysis, mesh generation, vector fields, volume and medical imaging, computer graphics, and computational topology.

Host: Prof. Adam Bargteil

talk: Reverse Engineering of Dynamic Regulatory Networks from Morphological Data, 11am 4/7

Reverse Engineering of Dynamic Regulatory Networks
from Morphological Experimental Data

Prof. Daniel Lobo, Biological Sciences, UMBC
11:00am Thursday, 7 April 2016, ITE Building, Room 325b

Many crucial experiments in developmental, regenerative, and cancer biology are based on manipulations and perturbations resulting in morphological outcomes. For example, planarian worms can regenerate a complete organism from almost any amputated piece, but knocking down certain genes can result in the regeneration of double-head phenotypes. However, the inherent complexity and non-linearity of biological regulatory networks prevent us from manually discerning testable comprehensive models from patterning and morphological results, and existing bioinformatics tools are generally limited to genomic or time-series concentration data. As a consequence, despite the huge experimental datasets in the literature, we still lack mechanistic explanations that can account for more than one or two morphological results in many model organisms. To bridge this gulf separating morphological data from an understanding of pattern and form regulation, we developed a computational methodology to automate the discovery of dynamic genetic networks directly from formalized phenotypic experimental data. In this seminar, I will present novel formal ontologies and databases of surgical, genetic, and pharmacological experiments with their resultant morphological phenotypes, together with artificial intelligence tools based on evolutionary computation and in silico simulators that can directly mine these data to reverse-engineer mechanistic dynamic genetic models. We demonstrated this approach by automatically discovering the first comprehensive model of planarian regeneration, which not only explains at once all the key experiments available in the literature (including surgical amputations, knock-down of specific genes, and pharmacological treatments), but also predicts testable novel pathways and genes. This approach is paving the way for understanding the regulation (and dysregulation) of complex patterns and shapes in developmental, regenerative, and cancer biology.
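
For readers unfamiliar with this class of methods, the sketch below illustrates the general evolutionary search loop described above. It is only illustrative, not Prof. Lobo's implementation; the helper functions (simulate, random_network, mutate, crossover) and the experiment fields (perturbation, phenotype) are assumptions introduced for the example.

    # Illustrative sketch only -- not the speaker's code. Candidate regulatory
    # networks are evolved and scored by how well their simulated outcomes
    # match the formalized experiments (perturbation -> observed phenotype).
    import random

    def reverse_engineer(experiments, simulate, random_network, mutate, crossover,
                         pop_size=100, generations=1000):
        def fitness(net):
            # fraction of experiments whose simulated outcome matches the observed phenotype
            matches = sum(simulate(net, e.perturbation) == e.phenotype for e in experiments)
            return matches / len(experiments)

        population = [random_network() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]          # keep the best half
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)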

Daniel Lobo is an Assistant Professor at the University of Maryland, Baltimore County. His research aims to understand, control, and design the dynamic regulatory mechanisms governing complex biological processes. To this end, his group develops new computational methods, ontologies, and high-performance in silico experiments to automate the reverse-engineering of quantitative models from biological data and the design of regulatory networks for specific functions. They seek to discover the mechanisms of development and regeneration, find therapies for cancer and other diseases, and streamline the application of synthetic biology. His work has received widespread media coverage including Wired, TechRadar, and Popular Mechanics.

CSEE faculty cited in news for technology transfer research

The Baltimore Sun and Washington Post ran stories on efforts by the University System of Maryland to help faculty and students turn their research into growing businesses. The stories cite work being done at UMBC by CSEE faculty Nilanjan Banerjee, Ryan Robucci and Chintan Patel and their research students.

“UMBC professors Nilanjan Banerjee and Ryan Robucci relied heavily on the university to help them commercialize two types of wearable sensors they invented. One of the sensors, called RestEaze, measures leg movements during sleep, which may help identify if people have restless leg syndrome, ADHD or even cardiac problems. The other, called Inviz, is for people with limited mobility, who can use the sensor to call 911 or turn on a TV.

University and state officials helped walk the professors through the process of obtaining grants, patenting the technology and creating a business. Banerjee said the university is increasingly stressing that its researchers focus on commercializing their discoveries.

‘UMBC also definitely wants to place itself as a university that is helping the community around it as well as the state,’ Banerjee said. ‘It’s one of our responsibilities to make sure we have societal impact, and commercializing is one way of doing it.’”

RestEaze is a wearable system that can non-intrusively monitor sleep quality in a home setting. It is being developed in conjunction with the Johns Hopkins University School of Medicine with funding from the TEDCO Innovation Commercialization Program, which provides funding to support the commercialization of qualified university technologies. Inviz is a low-cost gesture recognition system that uses flexible textile-based capacitive sensors that can be embedded in fabric. It is being developed with funding from the National Science Foundation and a gift from Microsoft. Both projects are collaborative efforts between faculty and students from UMBC’s computer science and computer engineering programs.

talk: Probabilistic Modeling of Socio-Behavioral Interactions, 12p 3/24

A Probabilistic Approach to Modeling Socio-Behavioral Interactions

Arti Ramesh, University of Maryland, College Park

Noon Thursday, 24 March 2016, ITE325b

The vast growth and reach of the Internet and social media have led to a tremendous increase in socio-behavioral interaction content on the web. The ever-increasing number of online interactions has led to a growing interest in understanding and interpreting online communications to enhance user experience. My work focuses on building scalable computational methods and models for representing and reasoning about rich, heterogeneous, interlinked socio-behavioral data. In this talk, I focus on one such emerging online interaction platform: massive open online courses (MOOCs). I develop a family of probabilistic models to represent and reason about complex socio-behavioral interactions in the following real-world problems: 1) modeling student engagement, 2) predicting student completion and dropouts, 3) modeling student sentiment in discussion forums toward various course aspects (e.g., academic content vs. logistics) and its effect on course completion, and 4) designing an automatic system to predict fine-grained topics and sentiment in online course discussion forums. I demonstrate the efficacy of these models via extensive experimentation on data from twelve Coursera courses. These methods have the potential to improve the learning and teaching experience of online education participants and to help focus limited instructor resources on increasing student retention.

Arti Ramesh is a PhD candidate at the University of Maryland, College Park, advised by Prof. Lise Getoor. Her primary research interests are in machine learning and data science, particularly probabilistic graphical models. Her research focuses on building scalable models for reasoning about interconnectedness, structure, and heterogeneity in socio-behavioral networks. She has published papers in peer-reviewed conferences such as AAAI and ACL. She has served on the program committee for the ACL Workshop on Building Educational Applications and as a reviewer for notable conferences and journals such as NIPS, Social Networks and Mining, and Computer Networks. She has won multiple awards during her graduate study, including the Outstanding Graduate Student Dean’s Fellowship (2016), the Dean’s Graduate Fellowship (2012-2014), and a Yahoo scholarship to attend the Grace Hopper Celebration. She has worked at IBM Research and LinkedIn during her graduate study. She received her master’s in Computer Science from the University of Massachusetts Amherst.

talk: Adversarial Machine Learning in Relational Domains, 12pm 3/22

Adversarial Machine Learning in Relational Domains

Prof. Daniel Lowd, University of Oregon

12:00-1:00 Tuesday, 22 March 2016, ITE 325b, UMBC

Many real-world domains, such as web spam, auction fraud, and counter-terrorism, are both adversarial and relational. In adversarial domains, a model that performs well on training data may do poorly in practice as adversaries modify their behavior to avoid detection. Previous work in adversarial machine learning has assumed that instances are independent from each other, both when manipulated by an adversary and labeled by a classifier. Relational domains violate this assumption, since object labels depend on the labels of related objects as well as their own attributes.

In this talk, I will present two different methods for learning relational classifiers that are robust to adversarial noise. Our first approach assumes that related objects have correlated labels and that the adversary can modify a certain fraction of the attributes. In this case, we can incorporate the adversary’s worst-case manipulation directly into the learning problem and find optimal weights in polynomial time. Our second method generalizes to any relational learning problem where the perturbations in feature space are bounded by an ellipse or polyhedron. In this case, we show that adversarial robustness can be achieved by a simple regularization term or linear transformation of the feature space. These results form a promising foundation for building robust relational models for adversarial domains.
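
As a rough illustration of the second result, here is a minimal sketch under the assumption of a linear classifier with an L2-bounded adversary (this is the standard robust-optimization reduction, not code from the speaker's work): the adversary's worst-case perturbation simply shrinks each example's margin by eps times the weight norm, so robustness amounts to a norm-dependent penalty.

    # Minimal sketch, assuming a linear classifier and an L2-bounded adversary
    # (||delta|| <= eps); names and interface are illustrative assumptions.
    import numpy as np

    def robust_hinge_loss(w, X, y, eps):
        """Worst-case hinge loss when each feature vector may be perturbed
        by any delta with L2 norm at most eps."""
        margins = y * (X @ w)                      # per-example margins y_i * (w . x_i)
        worst = margins - eps * np.linalg.norm(w)  # adversary pushes every margin down
        return np.maximum(0.0, 1.0 - worst).mean()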

Daniel Lowd is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. His research interests include learning and inference with probabilistic graphical models, adversarial machine learning, and statistical relational machine learning. He received his Ph.D. in 2010 from the University of Washington. He has received a Google Faculty Award, an ARO Young Investigator Award, and the best paper award at DEXA 2015.

Host: Prof. Cynthia Matuszek

talk: Rethinking the Cloud for Next-generation Applications, 3/21

Rethinking the Cloud for Next-generation Applications

Tian Guo, University of Massachusetts
11:00am Monday, 21 March 2016, ITE325b

Today’s cloud platforms serve an increasing number of requests from millions of mobile users. This mobile workload introduces new challenges and workload dynamics that differ from traditional workloads. In the future, billions of Internet-of-Things (IoT) devices will connect to cloud platforms and compete for cloud resources. Current cloud platforms are agnostic to the type of end-devices and are not well suited to emerging application needs. My work argues that these trends require a rethinking of current cloud platforms and focuses on the challenges of handling the dynamics introduced by these next-generation applications.

In this talk, I will describe two aspects of cloud design: handling demand-side dynamics from emerging cloud workloads and handling supply-side dynamics from varying cloud platform resources. Specifically, I will describe model-driven mechanisms to optimize user-perceived performance for global workloads that exhibit spatial variations, and mechanisms to effectively support running applications on transient servers—servers with unpredictable availability. Finally, I will conclude my talk with future work in cloud research to handle emerging mobile and IoT applications.

Tian Guo is a Ph.D. student in the College of Information and Computer Sciences at University of Massachusetts Amherst. Her research interests include distributed systems, cloud computing, mobile computing and cloud-enabled IoTs. Her current focus is on handling dynamics introduced by new cloud workloads and emerging cloud platforms. She received her B.E. in Software Engineering from Nanjing University, China in 2010 and her M.S. in Computer Science from University of Massachusetts Amherst in 2013.

talk: Integrated Circuit Security and Trustworthiness, 11am 3/23

Integrated Circuit Security and Trustworthiness

Dr. Hassan Salmani, Howard University

11:00am Wednesday, 23 March 2016, ITE 325b, UMBC

Integrated circuits (ICs) are at the core of modern computing systems, from military systems to smart electric power grids, and their security and trustworthiness ground the security of the entire system. Notwithstanding the central importance of IC security and trustworthiness, the horizontal IC supply chain has become prevalent due to the confluence of increasingly complex supply chains and cost pressures.

In this presentation, Professor Salmani will present an overview of some of his contributions to hardware security and trust, including the vulnerability of digital circuits to malicious modifications known as hardware Trojans at different design levels, design methodologies and techniques that facilitate hardware Trojan detection, and design methodologies that prevent design counterfeiting. In a detailed discussion, he will focus on the vulnerability of ICs to hardware Trojan insertion at the layout level.

Professor Hassan Salmani received his Ph.D. from the University of Connecticut in 2011. He is currently an Assistant Professor at Howard University. His current research is sponsored by the Defense Advanced Research Projects Agency and Howard University, and he has published dozens of journal articles and refereed conference papers and given several invited talks. He has also published one book and one book chapter. His current research projects include hardware security and trust and supply chain security. He is a member of the SAE International G-19A Tampered Subgroup, ACM, and ACM SIGDA. He serves as a program committee member and session chair for the Design Automation Conference, the Hardware-Oriented Security and Trust symposium, the International Conference on Computer Design, and VLSI Design and Test.

talk: Improving Password Security and Usability with Data-Driven Approaches, 3/11

Improving Password Security and Usability with Data-Driven Approaches

Blase Ur, CMU

12:30pm Friday, 11 March 2016, ITE325b

Users often must make security and privacy decisions, yet are rarely equipped to do so. In my research, I aim to understand both computer systems and the humans who use them. Armed with this understanding, I design and build tools that help users protect their security and privacy.

In this talk, I will describe how I applied this research approach to password security and usability. As understanding what makes a password good or bad is crucial to this process, I will first discuss our work on metrics for password strength. These metrics commonly involve modeling password cracking, which we found often vastly underestimates passwords’ vulnerability to cracking in the real world. We instead propose combining a series of carefully configured approaches, which we found to conservatively model real-world experts. We used these insights to implement a Password Guessability Service, which is already used by nearly two dozen research groups. I will then discuss our work on another key step to helping users create better passwords: understanding why humans create the passwords they do. I will focus on the impact of password-strength meters and users’ perceptions of password security. By combining better metrics with an understanding of users, I show how we can design tools that guide users toward better passwords.
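
One concrete way to picture the "combining approaches" idea is sketched below. This is a hedged illustration of the interface, not the actual Password Guessability Service: each password is scored by the minimum guess number any configured cracking approach assigns, which conservatively approximates a well-resourced attacker.

    # Hedged sketch; the function name and interface are assumptions for illustration.
    def combined_guess_number(password, approaches):
        """approaches: callables mapping a password to a guess number,
        or None if that approach never guesses the password."""
        guesses = [g for g in (a(password) for a in approaches) if g is not None]
        return min(guesses) if guesses else float("inf")  # never guessed -> effectively infinite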

Blase Ur is a Ph.D. candidate at Carnegie Mellon University’s School of Computer Science, where he is advised by Lorrie Cranor. His research interests lie at the intersection of security, privacy, and human-computer interaction (HCI). In addition to his work on password security, he has studied numerous aspects of online privacy and the Internet of Things (IoT). Previously, he obtained his A.B. in Computer Science from Harvard University. He is the recipient of an NDSEG fellowship, a Fulbright scholarship, a Yahoo Key Scientific Challenges Award, the best paper award at UbiComp 2014, and honorable mentions for best paper at both CHI 2012 and CHI 2016.

talk: To Measure or not to Measure Terabyte-Sized Images? 3pm 3/9

CHMPR Seminar

To Measure or not to Measure Terabyte-Sized Images?

Peter Bajcsy, PhD
Information Technology Laboratory
National Institute of Standards and Technology

3:00pm Wednesday, 9 March 2016, ITE325b, UMBC

This talk will elaborate on the basic question “To Measure or Not To Measure Terabyte-Sized Images?” as William Shakespeare might have posed it had he been a bench scientist at NIST. This question is a dilemma for many traditional scientists who operate imaging instruments capable of acquiring very large quantities of images: manual analyses of terabyte-sized images, together with insufficient software and computational hardware resources, prevent scientists from making new discoveries, increasing the statistical confidence of data-driven conclusions, and improving the reproducibility of reported results.

The motivation for our work comes from experimental systems for imaging and analyzing human pluripotent stem cell cultures at the spatial and temporal coverages that lead to terabyte-sized image data. The objective of such an unprecedented cell study is to characterize specimens at high statistical significance in order to guide a repeatable growth of high quality stem cell colonies. To pursue this objective, multiple computer and computational science problems have to be overcome including image correction (flat-field, dark current and background), stitching, segmentation, tracking, re-projection, feature extraction, data-driven modeling and then representation of large images for interactive visualization and measurements in a web browser.
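
To make the first pipeline step concrete, here is a minimal sketch of standard flat-field correction using the textbook formula; the variable names are illustrative and this is not NIST's actual code.

    # Minimal sketch of flat-field correction: corrected = (raw - dark) / (flat - dark),
    # rescaled so the output preserves the mean illumination level.
    import numpy as np

    def flat_field_correct(raw, flat, dark):
        gain = flat.astype(float) - dark
        gain[gain == 0] = np.finfo(float).eps   # avoid division by zero in dead pixels
        return (raw.astype(float) - dark) / gain * gain.mean()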

I will outline and demonstrate web-based solutions deployed at NIST that have enabled new insights in cell biology using TB-sized images. Interactive access to about 3TB of image and image feature data is available at https://isg.nist.gov/deepzoomweb/.

Peter Bajcsy received his Ph.D. in Electrical and Computer Engineering in 1997 from the University of Illinois at Urbana-Champaign and an M.S. in Electrical and Computer Engineering in 1994 from the University of Pennsylvania. He worked in machine vision, government contracting, and research and educational institutions before joining the National Institute of Standards and Technology in 2011. At NIST, he has been leading a project focusing on the application of computational science in biological metrology, specifically stem cell characterization at very large scales. Peter’s area of research is large-scale image-based analyses and syntheses using mathematical, statistical and computational models while leveraging computer science fields such as image processing, machine learning, computer vision, and pattern recognition. He has co-authored more than 27 journal papers, eight books or book chapters, and close to 100 conference papers.

UMBC Grand Challenges Scholars Program, apply by 3/25


The UMBC Grand Challenges Scholars Program engages students from all majors who want to help solve important problems facing society. It is organized around the fourteen Grand Challenges identified by the National Academy of Engineering, which focus on sustainability, health, security and knowledge. Their solutions will require interdisciplinary teams and years of sustained effort. The national program combines curricular and extra-curricular activities with five components designed to prepare students to be the generation that solves the grand challenges facing society in this century.

A UMBC Grand Challenge Scholar will design a personalized program to explore a selected Grand Challenge. The program areas include research, interdisciplinary study, entrepreneurship, global perspectives, and service. UMBC Grand Challenge Scholars will receive formal designation at graduation for their accomplishments. The program is designed for students completing their sophomore year, but all students may apply. Get more information  here and apply online to become a UMBC Grand Challenge Scholar by March 25.

Find out more about the UMBC Grand Challenges Scholars Program from Prof. Marie desJardins this Tuesday, March 8, from 12-1pm (pizza provided!) or Thursday, March 10, from 4-5pm (snacks provided!), in ITE 325b. 
