talk: The strange world of quantum computing, 2:30 Tue 11/5, ITE 325

Computer Science and Electrical Engineering
Quantum Computing Seminar

The Strange World of Quantum Computing

Samuel Lomonaco, CSEE, UMBC

2:30-3:00 Tuesday, 5 November 2013, ITE 325b

This talk will give an introductory overview of quantum computing in an intuitive and conceptual fashion. No prior knowledge of quantum mechanics will be assumed. This is the first of a series of talks based on the four invited lectures given at Oxford. PowerPoint slides can be found online here.

Samuel J. Lomonaco is a professor in the Department of Computer Science and Electrical Engineering of the University of Maryland, Baltimore County. He is internationally known for his many contributions in mathematics and in computer science. His research interests span a wide range of subjects, from knot theory and algebraic and differential topology to algebraic coding theory, quantum computation, and symbolic computation. In quantum cryptography, he has shown how quantum information theory can be used to gain a better understanding of eavesdropping with quantum entanglement. In quantum computation, he has shown how Lie groups can be used to solve problems arising in the study of quantum entanglement. In 2000 Professor Lomonaco organized the first American Mathematical Society short course on quantum computation.

Organizer: Prof. Samuel Lomonaco

talk: Manish Gupta on Deriving Insights from Data, 3pm Mon 11/4

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Deriving Insights from Data:
Peek at Research Challenges for some Industry Verticals

Dr. Manish Gupta
Vice President and Director
Xerox Research Center India

3:00pm Monday, 4 November, 2013, ITE 325b, UMBC

We are entering an era that will usher in dramatic changes in several industries via exploitation of data. With the proliferation of data from sensors that are becoming ubiquitous and increasing digitization of data that used to be in non-electronic form, there are opportunities to completely transform the way the world is run. We present examples of such opportunities in the Financial, Healthcare, Education and Infrastructure domains. We also describe unique challenges like scale and heterogeneity in growth markets like India, which often require different approaches to solving these problems. Diving deeper into the healthcare industry, we present preliminary work that shows the applicability of remote sensing and data analytics not only to measure body vitals such as temperature and heart rate, but also to diagnose diseases such as breast cancer and atrial fibrillation (a form of cardiac arrhythmia) in the future. As more of the patients’ medical history gets captured in electronic health record systems, there is a further promise of applying real-time predictive analytics (based on accurate models for diagnosis and treatment of various diseases, utilizing the latest medical literature) to assist doctors in practicing personalized, evidence-based medicine. We describe outstanding challenges, including data privacy, machine learning over heterogeneous forms of data, and financial incentive design, which we believe must be addressed to enable transformational impact.
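
By way of illustration, the remote measurement of heart rate mentioned above typically reduces to finding the dominant frequency of a periodic intensity signal. The Python sketch below does this on synthetic data; the sampling rate, noise level, and band limits are illustrative assumptions, not details from the talk.

import numpy as np

# Estimate heart rate by locating the dominant spectral peak in a
# physiologically plausible band (40-180 bpm). The signal is synthetic.
fs = 30.0                                        # e.g., camera frames per second
t = np.arange(0, 30, 1 / fs)                     # 30 seconds of samples
signal = np.sin(2 * np.pi * (72.0 / 60.0) * t)   # a 72 bpm pulse component
signal += 0.5 * np.random.randn(t.size)          # sensor noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 40 / 60) & (freqs <= 180 / 60)
estimate = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated heart rate: {estimate:.1f} bpm")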

Dr. Manish Gupta is Vice President at Xerox Corporation and Director of Xerox Research Centre in India. Previously, Manish has served as Managing Director, Technology Division at Goldman Sachs in India, and has held various leadership positions with IBM, including that of Director, IBM Research – India and Chief Technologist, IBM India/South Asia. From 2001 to 2006, he served as a Senior Manager at the IBM T.J. Watson Research Center in Yorktown Heights, New York, where he led the team developing system software for the Blue Gene/L supercomputer. IBM was awarded a National Medal of Technology and Innovation for the invention of Blue Gene by US President Barack Obama in 2009. Manish earned a B.Tech. in Computer Science from IIT Delhi in 1987, an M.S. from the Ohio State University in 1988 and a Ph.D. from the University of Illinois at Urbana Champaign in 1992. He has co-authored over 75 papers, with more than 5,000 citations in Google Scholar (with an h-index of 41) in the areas of high-performance computing, compilers, and virtual machine optimizations, and has been granted more than 15 US patents. While at IBM, Manish received an Outstanding Innovation Award, two Outstanding Technical Achievement Awards and the Lou Gerstner Team Award for Client Excellence. Manish is an ACM Fellow.

talk: Acoustic-Tactile Rendering of Visual Information for the Visually Impaired, 2:30 Mon 11/11, ITE325b

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Acoustic-Tactile Rendering of Visual
Information for the Visually Impaired

Thrasyvoulos N. Pappas

Electrical Engineering and Computer Science
Northwestern University

2:30pm Monday, 11 November 2013, ITE 325B, UMBC

After a brief overview of research in the Department of Electrical Engineering and Computer Science at Northwestern University, we will focus on one particular research problem, the use of hearing and touch for conveying graphical and pictorial information to visually impaired (VI) people. This problem combines visual, acoustic, and tactile signal analysis with an understanding of human perception and interface design. The main idea is that the user actively explores a two-dimensional layout consisting of one or more objects with the finger on a touch screen. The objects are displayed via sounds and raised-dot tactile patterns. The finger acts as a pointing device and provides kinesthetic feedback. The touch screen is partitioned into regions, each representing an element of a visual scene or graphical display. A key element of our research is the use of spatial sound to facilitate active exploration of the layout. We use the head-related transfer function (HRTF) for rendering sound directionality and variations of sound intensity and tempo for rendering proximity. Our research has addressed object shape and size perception, as well as the perception of a 2-D layout of simple objects with identical size and shape. We have also considered the rendering of a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a “virtual cane.” Subjective experiments with visually-blocked subjects demonstrate the effectiveness of the proposed approaches. Our research findings are also expected to have an impact in other applications where vision cannot be used, e.g., for GPS navigation while driving, fire-fighter operations in thick smoke, and military missions conducted under the cover of darkness.
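
As a simple illustration of the proximity rendering described above, the Python sketch below maps the distance between the exploring finger and a target object to sound intensity and tempo. The function, ranges, and constants are illustrative assumptions, not the authors' implementation; full HRTF-based directional rendering is well beyond a few lines.

import math

# Louder and faster as the finger nears the object; quieter and slower
# as it moves away. Positions are in touch-screen pixels.
def proximity_cues(finger, target, max_dist=500.0):
    """Return (gain, beeps_per_second) for a finger position."""
    dist = math.hypot(finger[0] - target[0], finger[1] - target[1])
    closeness = max(0.0, 1.0 - dist / max_dist)  # 1.0 on the object, 0.0 far away
    gain = 0.1 + 0.9 * closeness                 # intensity cue
    tempo = 1.0 + 7.0 * closeness                # tempo cue
    return gain, tempo

print(proximity_cues(finger=(120, 80), target=(100, 100)))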

Thrasos Pappas received the Ph.D. degree in electrical engineering and computer science from MIT in 1987. From 1987 until 1999, he was a Member of the Technical Staff at Bell Laboratories, Murray Hill, NJ. He joined the EECS Department at Northwestern in 1999. His research interests are in human perception and electronic media, in particular image and video quality and compression, image and video analysis, content-based retrieval, model-based halftoning, and tactile and multimodal interfaces. Prof. Pappas is a Fellow of the IEEE and SPIE. He has served as editor-in-chief of the IEEE Transactions on Image Processing (2010-12), as an elected member of the Board of Governors of the IEEE Signal Processing Society (2004-07), as chair of the IEEE Image and Multidimensional Signal Processing Technical Committee (2002-03), and as technical program co-chair of ICIP-01 and ICIP-09. Since 1997 he has been co-chair of the SPIE/IS&T Conference on Human Vision and Electronic Imaging.

Host: Janet C. Rutledge, Ph.D.

PhD defense: Visualizing Sequential Patterns in Large Datasets, 11/1

PhD Defense

Visualizing Sequential Patterns in Large
Datasets Using Levels of Abstraction

Dana Wortman

11am – 2pm, Friday, 1 November 2013, ITE 325b

Student retention and success are important topics in all academic fields and institutions. Faculty seek to understand which topics, theories, or skills defeat students or require strengthening to promote success. Programs seek to understand how to better sequence courses to ensure students are prepared for requisite future courses. Institutions seek to understand how to intervene to promote retention and improve graduation rates. Unfortunately, most statistics gathered by Institutional Research efforts are limited to failure rates, enrollment rates, and graduation rates and do not often explore individual student performance. While these are often further analyzed by various student demographic attributes such as race and gender, these statistical methods alone are insufficient to understand student performance over time and sequential patterns of enrollment or success and failure. This research presents a method using multiple levels of abstraction to visualize performance patterns over time.

To visualize student enrollment and performance patterns, several issues must be addressed, including sequential versus concurrent enrollment, the spatial layout of course events, and performance over time. Another challenge addressed by this work is that of presenting sequences within the context of the entire program. To address these issues, multiple abstractions are combined in a multi-layered visualization that presents a high-level overview of student enrollment and performance patterns while retaining detailed information about individual students’ progress and performance as they advance through their courses.

The aggregated view represents the lowest level of abstraction: student enrollment and performance are aggregated into a graph structure, presenting patterns of movement throughout the program at the individual course level. The clustered view represents mined sequential patterns of enrollment and performance, illustrating common sequences. The directed view represents the highest level of abstraction and uses two visual elements, heat maps and a vector field, to illustrate overall performance in individual events and movement through the program. Results from multiple cohorts can then be superimposed on the same visualization to enable easy comparisons between patterns. Together, these abstractions provide a focus+context view of student performance, retaining outliers and emphasizing common patterns to illuminate dominant and unique patterns between cohorts of students.
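
To make the aggregated view concrete, the following Python sketch counts transitions between consecutive courses across student enrollment sequences, producing the kind of weighted directed graph that view visualizes. The course data here are hypothetical, not from the dissertation.

from collections import Counter

# Each sequence lists one student's courses in the order taken.
sequences = [
    ["CMSC201", "CMSC202", "CMSC341"],
    ["CMSC201", "CMSC202", "CMSC313"],
    ["CMSC201", "CMSC202", "CMSC341"],
]

# Count each consecutive course-to-course transition over all students.
edges = Counter()
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        edges[(a, b)] += 1

for (a, b), weight in edges.most_common():
    print(f"{a} -> {b}: {weight} students")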

This approach can help educators better understand student progress through the program, performance in individual courses, and student-selected course sequencing, and this information can be used to address deficiencies in preparation, skills, or prerequisites. To demonstrate the appropriateness of this approach, performance and enrollment patterns are explored in the Computer Science program at the University of Maryland, Baltimore County. Specifically, this work examines the Gateway policy, which requires students to earn a B or higher in the first two required programming courses before progressing, with the hope of validating the existing Gateway but also exploring other possible Gateway courses. Other issues explored within the Computer Science program include race, gender, math placement, and high school scores, with the goal of attracting and retaining a more diverse group of students.

Committee: Penny Rheingans (chair), Marie desJardins, Marc Olano, Tim Finin and Diane Lee

Talk: Zatyko on Cloud Forensics, Noon Fri 11/1, ITE 229, UMBC

Center for Information Security and Assurance
University of Maryland, Baltimore County

Cloud Forensics and its Many Challenges

Ken Zatyko

Assured Information Security, Inc.

12-1pm, Friday 1 November 2013, ITE 229, UMBC

In this presentation, we present a challenge question for today’s cyber experts, cyber scientists, and cyber analysts. Does Locard’s Exchange Principle apply in digital forensics? The dramatic increase in cybercrime and the repeated cyber intrusions into critical infrastructure demonstrate the need for improved security. The Executive Office of the President noted on May 12, 2011 that the “cyber threat is one of the most serious economic and national security challenges we face as a nation.” We believe addressing whether or not Locard’s Exchange Principle applies to digital forensics is a fundamental question that can guide or limit the scientific search for digital evidence.

Locard’s Exchange Principle is often stated in forensics publications as “every contact leaves a trace.” Essentially, Locard’s Exchange Principle is applied to crime scenes in which the perpetrator(s) of a crime comes into contact with the scene. The perpetrator(s) will both bring something into the scene and leave with something from the scene. In the cyber world, the perpetrator may or may not come in physical contact with the crime scene, and this brings a new facet to crime scene analysis. According to the World of Forensic Science, Locard’s publications make no mention of an “exchange principle,” although he did make the observation “Il est impossible au malfaiteur d’agir avec l’intensité que suppose l’action criminelle sans laisser des traces de son passage.” (It is impossible for a criminal to act, especially considering the intensity of a crime, without leaving traces of his presence.)

The term “principle of exchange” first appeared in Police and Crime-Detection in 1940, and was adapted from Locard’s observations. The field of digital forensics can be strictly defined as “the application of computer science and investigative procedures for a legal purpose involving the analysis of digital evidence after proper search authority, chain of custody, validation with mathematics, use of validated tools, repeatability, reporting, and possible expert presentation” (Zatyko, 2007). Furthermore, digital evidence is defined as information stored or transmitted in binary form that may be relied on in court (National Institute of Justice, 2004). However, digital forensics tools and techniques have also been used by cyber analysts and researchers to conduct media analysis, compile damage assessments, build timelines, and determine attribution. According to the Department of Defense Cyber Crime Center’s training program, cyber analysts require knowledge of how network intrusions occur, how various logs are created, what electronic evidence is, and how electronic artifacts are forensically gathered, as well as the ability to analyze data to produce comprehensive reports and link analysis charts.

Our hypothesis is that Locard’s Exchange Principle does apply to cyber crimes involving computer networks, such as identity theft, electronic bank fraud, or denial of service attacks, even if the perpetrator does not need to physically come in contact with the crime scene. Although the perpetrator may make only virtual contact with the crime scene through the use of a proxy machine, we believe he will still “leave a trace” and digital evidence will exist. This presentation will explore with audience input “where in the cloud is digital evidence found” and new ways it can lead to attribution. It will explore what new standards and techniques are needed to find these digital traces. Read-ahead information can be found here.

Ken Zatyko was previously the Director of the Department of Defense Computer Forensics Laboratory where he led the largest accredited, internationally recognized, leading-edge computer forensics laboratory located in Maryland. For several months, Mr. Zatyko has been working with NIST on a working group to further standards and technology to solve cloud forensics challenges. Mr. Zatyko is currently the Vice President of Maryland Operations with Assured Information Security.

Host: Dr. Alan T. Sherman

Defense: Tyler Simon on Task Scheduling for Scalable High Performance Computing

Computing a minimal spanning tree for a large graph is a common problem that can be computationally expensive.

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

Multiple Objective Task Scheduling
for Scalable High Performance Computing

Tyler A. Simon

12:30-2:30 Friday, 8 November 2013, ITE 325b

Individual processor frequencies have reached an upper physical and practical limit. Processor designs now involve adding multiple processing elements to a single chip to enable increased performance. It is expected that all future processor designs will have multiple cores, with a focus on reducing frequencies and adding more individual processing elements (cores) while balancing power consumption, reliability, and performance.

Due to the increased complexity as well as increased heterogeneity of parallel architectures, petascale and future exascale systems, with the number of processors on the order of 10^8-10^9, must incorporate more intelligent software tools that help manage parallel execution for the user. I demonstrate that by managing the parallel execution environment at runtime, we can satisfy performance tradeoffs for a particular application or application domain for a set of common HPC architectures. It is expected that future exascale computing systems will have to execute programs on many individual and potentially low powered processing elements. These processors need to be fed data efficiently and reliably through the duration of a parallel computation.
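
As a toy illustration of what runtime management can buy, the Python sketch below assigns each task to the currently least-loaded node as it arrives, in contrast to a static assignment fixed before execution. It is an illustrative simplification, not the system developed in this thesis.

import heapq

def schedule(task_costs, num_nodes):
    """Greedy runtime scheduling: returns (makespan, task-to-node plan)."""
    nodes = [(0.0, n) for n in range(num_nodes)]  # (accumulated load, node id)
    heapq.heapify(nodes)
    plan = []
    for task, cost in enumerate(task_costs):
        load, node = heapq.heappop(nodes)         # least-loaded node right now
        plan.append((task, node))
        heapq.heappush(nodes, (load + cost, node))
    return max(load for load, _ in nodes), plan

print(schedule([4.0, 2.0, 7.0, 1.0, 3.0], num_nodes=2))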

In this thesis I provide a performance analysis of two common graph algorithms for finding a minimum spanning tree and evaluate the multicore performance of a common high performance computing (HPC) benchmark on multicore processors. I also develop a novel autonomic execution model and adaptive runtime system, ARRIA (Adaptive Runtime Resource for Intensive Applications). ARRIA is designed with the intent of improving application programmability, scalability, and performance while freeing the programmer from explicit message passing and thread management. Experiments are conducted that evaluate ARRIA’s capabilities on data intensive applications, those where the majority of execution time is spent reading and writing to either local or remote memory locations. In my approach, I focus on developing task schedules that satisfy multiple objectives for clusters of compute nodes during runtime. This approach is novel in that it can control application performance and satisfy constraints, solved using multi-objective optimization techniques, as the program runs. The development and implementation of the ARRIA runtime system and subsequent optimization criteria likely provide reasonable models for the exascale computing era.
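
For reference, one of the two common minimum spanning tree algorithms such an analysis would cover is Kruskal's. The Python sketch below is a textbook serial version with a made-up edge list, not code from the thesis.

def kruskal(num_vertices, edges):
    """edges: iterable of (weight, u, v); returns the MST's edge list."""
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge only if it joins two trees
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))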

The results of this dissertation demonstrate, experimentally, that for high performance computing systems, a dynamic, task-based parallel programming environment and scheduler can provide lower total workload runtimes and higher utilization compared with commonly used static scheduling policies.

Talks: two PhD students talk about their research on quantum computing

Computer Science and Electrical Engineering
Quantum Computing Seminar

Thermal Light N-qubits

Tao Peng (PhD Advisor: Yanhua Shih)
UMBC Physics Department

2:30-3:00 Tuesday, 22 October 2013, ITE 325b

This talk will discuss the equipment and optical elements required for building the incoherent thermal source, the qubit, and the detection scheme for the intensity fluctuation-fluctuation correlation.


All-optical XOR, CNOT gates with initial insight
for quantum computation using linear optics

Omar Shehab (PhD Advisor: Samuel Lomonaco)
UMBC CSEE Department

3:00-3:30 pm, Tuesday, 22 October 2013, ITE 325b

The design for an all-optical XOR gate is proposed. The basic idea is to split the input beams and let them cancel or strengthen each other selectively, or flip the encoded information based on their polarization properties. The information is encoded in terms of the polarization of the beam. Polarization of a light beam is well understood; hence, the design should be feasible to implement. The truth table of the optical circuit is worked out and compared with the expected truth table. Then it is demonstrated that the design complies with the linear behavior of the XOR function.

Next, based on a similar idea, the design of an all-optical CNOT gate is proposed. The truth table for the gate is verified. Then, it is discussed how this approach can be used for Linear Optics Quantum Computation (LOQC). It is shown that a Hadamard gate, a rotation gate, and a CNOT gate make up a universal set of quantum gates based on linear optics. This novel approach requires no additional power supply, extra input beam or ancilla photon to operate. It also does not require an expensive and complex single photon source and detector. Only narrowband laser sources are required to operate these gates.
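
As an abstract sanity check of the gate logic (the logic only, not the proposed optical design), the Python sketch below verifies that a CNOT gate, restricted to its target output, realizes XOR, with classical bits standing in for the two polarization states that encode the information.

from itertools import product

def cnot(control, target):
    return control, target ^ control   # target flips iff control is 1

for c, t in product((0, 1), repeat=2):
    c_out, t_out = cnot(c, t)
    assert t_out == c ^ t              # the target output is exactly XOR
    print(f"in=({c},{t}) -> out=({c_out},{t_out})")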

Organizer: Prof. Samuel Lomonaco

Talk on graduate school and summer research, Noon Wed. 10/16

Dr. Evelyn Erenrich from Rutgers University will talk on “An Inside Look at Graduate School and Summer Research: How to Prepare, Get Accepted, and Succeed” at Noon on Wednesday, October 16 in room 208, Public Policy.

In addition to discussing strategies for research success, Dr. Erenrich will spotlight exciting programs and interdisciplinary training opportunities at Rutgers University, including a summer program, RiSE (Research in Science and Engineering), which has included many UMBC students. Mr. Immanuel Williams, a UMBC alumnus who is currently a doctoral Fellow at Rutgers, will join her to give his personal perspective. Students can also sign up for individual appointments by contacting Ms. Alicia Hall.

talk: Thermal light N-qubits (part II), 2:30 Tue 10/8 ITE325b

UMBC Quantum Computation Seminar

Thermal light N-qubits, part II

Yanhua Shih

Physics Department, UMBC

2:30-4:00 Tuesday, 8 October 2013, ITE 325b

This talk will discuss a few recent experiments on Nth-order interference of N independent and incoherent thermal fields from their intensity fluctuation correlation measurement. The observed interference is similar to that of entangled states. These experiments have demonstrated the possibility of producing N-qubits from N incoherent thermal fields.
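
A toy numerical model suggests what an intensity fluctuation correlation measurement sees; it is an illustration, not the talk's experiments. Modeling a thermal field as a complex Gaussian amplitude, the intensity fluctuations at two detectors are correlated when both view the same field and vanish for independent fields.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def thermal_amplitude(size):
    # Complex circular Gaussian amplitude: a standard model of thermal light.
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

def fluct_corr(i1, i2):
    # <ΔI1 ΔI2>: the intensity fluctuation correlation.
    return np.mean((i1 - i1.mean()) * (i2 - i2.mean()))

a = thermal_amplitude(n)   # one thermal field seen by both detectors
b = thermal_amplitude(n)   # an independent thermal field
print("same field:        ", fluct_corr(np.abs(a)**2, np.abs(a)**2))  # ~1
print("independent fields:", fluct_corr(np.abs(a)**2, np.abs(b)**2))  # ~0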

Yanhua Shih received his B.S. degree in theoretical physics from Northwestern University of China in 1981, and his Ph.D. in physics from the University of Maryland, College Park, in 1987. He joined the Faculty of the University of Maryland, Baltimore County in 1989 and established the Quantum Optics Program at UMBC. He is currently Professor of Physics at UMBC. His research interests include fundamental problems of quantum theory and general relativity.

Organizer: Prof. Samuel Lomonaco

talk: Seymour on Quantum Computing and Cybersecurity, Noon Fri. 10/4, ITE228

UMBC Center for Information Security and Assurance

Quantum Computing and Cybersecurity

John Seymour

Noon-1:00 Friday, 4 October 2013
Cyber Defense Lab, room 228 ITE, UMBC

This talk will be a brief introduction to the topic of quantum computing for the computer scientist interested in cybersecurity. It will begin with a light summary of the fundamental quantum algorithms and move on to recent advances in quantum computing, including the D-Wave quantum optimizer, the University of Bristol’s new quantum chip, quantum programming languages, and more. Finally, it will introduce some current research questions and projects at the intersection of quantum computing and cybersecurity.
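
For readers who want a concrete starting point, the Python sketch below simulates Deutsch's algorithm, one of the fundamental quantum algorithms such an introduction typically covers, using plain linear algebra. It is an illustrative toy, not material from the talk.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def deutsch(f):
    """Decide if f: {0,1} -> {0,1} is constant or balanced with one query."""
    # Oracle U|x,y> = |x, y XOR f(x)> on the basis |00>,|01>,|10>,|11>.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.kron(H, H) @ np.array([0.0, 1.0, 0.0, 0.0])  # H on each qubit of |01>
    state = np.kron(H, np.eye(2)) @ (U @ state)              # oracle, then H on qubit 1
    p0 = state[0] ** 2 + state[1] ** 2   # probability the first qubit reads 0
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))   # constant function -> "constant"
print(deutsch(lambda x: x))   # balanced function -> "balanced"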

John Seymour is a Ph.D. student in the UMBC computer science graduate program. As a UMBC undergraduate, he was a triple major in Computer Science, Mathematics, and Philosophy. He is currently working on three research projects: evaluation of a detection protocol for Man-in-the-Middle attacks, a web-based game for teaching students basic concepts of internet security, and integration of social media with internet voting to facilitate collaborative decision making.

Host: Dr. Alan T. Sherman
