PhD Defense: Chris Morris, Multi-Modal Saliency Fusion for Illustrative Image Enhancement

Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

Multi-Modal Saliency Fusion for Illustrative Image Enhancement

Christopher J. Morris

10:30-12:30, Wednesday, 15 January 2014, ITE 365 & 352

Digitally manipulated or augmented images are increasingly prevalent. Multisensor systems produce augmented images that integrate data into a single context. Mixed-reality images are generated by inserting computer-generated objects into a natural scene. Digital processing for application-specific tasks (e.g., compression or network transmission) can create images distorted with processing artifacts. In each case, digital image augmentation can introduce artifacts that influence the perception of the image.

Visual cues (e.g., depth or size cues) may no longer be perceptually consistent in an augmented image. A feature deemed important in its local context may no longer be important in the broader integrated context. Inserted synthetic objects may lack the visual cues needed for proper perception of the overall scene. Finer cues that distinguish critical features may be lost in compressed images. Enhancing augmented images to add or restore visual cues can improve the image's perceptibility.

This dissertation presents a framework for illustrating images to enhance critical features. The enhancements improve the perception and comprehension of the augmented image. The framework uses a linear combination of image (2D), surface topology (3D), and task-based saliency measures to identify the critical features in the image. The use of multi-modal saliency allows the visualization designer to adjust the definition of critical features based on the attributes of the scene and the task at hand. Once identified, the features are enhanced using a non-photorealistic rendering (NPR) deferred illustration technique. The enhancements, inspired by an analysis of artists' techniques, bolster the features' perceptual cues.
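
The fusion step can be pictured as a per-pixel weighted sum of normalized saliency maps. A minimal Python/NumPy sketch of that idea follows; the normalization and the example weights are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def normalize(s):
    """Rescale a saliency map to [0, 1] so modalities are comparable."""
    s = s.astype(float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_saliency(image_sal, surface_sal, task_sal, w=(0.4, 0.3, 0.3)):
    """Linear combination of 2D image, 3D surface-topology, and task
    saliency maps. Adjusting the weights lets a visualization designer
    shift what counts as 'critical' for the scene and task at hand."""
    maps = [normalize(m) for m in (image_sal, surface_sal, task_sal)]
    fused = sum(wi * m for wi, m in zip(w, maps))
    return fused / sum(w)   # keep the fused map in [0, 1]
```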

To measure how well salient features are preserved between the enhanced image and the original, the framework defines the Saliency Similarity Metric (SSM). The SSM provides feedback with which the visualization designer can make informed tuning decisions. The benefits of illustrative enhancement are analyzed using objective and subjective evaluations. Under conventional metrics, illustrative enhancements improve the perceptual quality of images distorted by noise or compression artifacts. User survey results reveal that enhancements must be carefully applied to yield perceptual improvement. The framework can be effectively utilized in mobile rendering, augmented reality, and sensor fusion applications.
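
The abstract does not define the SSM itself; as a purely hypothetical stand-in, one could compare the salient regions of the two maps by intersection-over-union, where 1.0 means the enhancement preserved the original salient features exactly:

```python
import numpy as np

def saliency_similarity(sal_orig, sal_enh, thresh=0.5):
    """Hypothetical stand-in for the SSM (the dissertation's actual metric
    may differ): IoU of the thresholded salient regions of two maps."""
    a, b = sal_orig >= thresh, sal_enh >= thresh
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```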

Committee: Drs. Penny Rheingans (chair), Dan Bailey, Jian Chen, Thomas Jackman (Desert Research Institute), Anupam Joshi and Marc Olano

talk: Lomonaco on Shor's Algorithm (part 2), 2:30-3:00 Tue 12/3


Computer Science and Electrical Engineering
Quantum Computing Seminar

Shor’s Algorithm Part 2

Samuel Lomonaco, CSEE, UMBC

2:30-3:00 Tuesday, 3 December 2013, ITE 325b

As requested in the last seminar, we will devote this seminar to stepping through the complete Shor algorithm (from beginning to end) to factor the “enormous” integer 21. The talk will be based on the example found at the beginning of the following paper.

Quantum hidden subgroup algorithms: A mathematical perspective, AMS Contemporary Mathematics, vol. 305 (2002), pp. 139–202.
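
For readers who want to follow along, the classical skeleton of factoring 21 is easy to reproduce in Python; the brute-force order finding below stands in for the period-finding step that the quantum Fourier transform accelerates.

```python
from math import gcd

N = 21
a = 2                       # coprime base: gcd(2, 21) == 1

# Order finding: the step Shor's algorithm performs with the QFT.
# Classically (brute force): smallest r > 0 with a^r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1                  # here r = 6, since 2^6 = 64 = 3*21 + 1

# r is even and a^(r/2) != -1 (mod N), so the factors fall out of gcds:
x = pow(a, r // 2, N)       # 2^3 = 8
print(gcd(x - 1, N), gcd(x + 1, N))   # -> 7 3
```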

Samuel J. Lomonaco is a professor in the Department of Computer Science and Electrical Engineering of the University of Maryland, Baltimore County. He is internationally known for his many contributions in mathematics and in computer science. His research interests span a wide range of subjects, from knot theory and algebraic and differential topology to algebraic coding theory, quantum computation, and symbolic computation. In quantum cryptography, he has shown how quantum information theory can be used to gain a better understanding of eavesdropping with quantum entanglement. In quantum computation, he has shown how Lie groups can be used to solve problems arising in the study of quantum entanglement. In 2000 Professor Lomonaco organized the first American Mathematical Society short course on quantum computation.

Organizer: Prof. Samuel Lomonaco

talk: Simson Garfinkel on Finding privacy leaks and stolen data with bulk data analysis


Center for Information Security and Assurance
University of Maryland, Baltimore County

Finding privacy leaks and stolen data with
bulk data analysis and optimistic decoding

Dr. Simson Garfinkel
Naval Postgraduate School

12:00-1:00 Friday, 6 December 2013, ITE 229

Modern digital forensics tools are largely based on the recovery and analysis of files. This talk explores how identity information such as email addresses, credit card numbers, and other types of information can be found more efficiently using bulk data analysis, and how results are significantly improved through the use of optimistic decompression. Together, these techniques can find important information on computer media that is ignored by the majority of today's digital forensics tools.

This talk presents the results of a study of roughly 5,000 hard drives purchased on the secondary market and shows how different kinds of data formats can be traced to different kinds of privacy leaks and coding errors. It shows how the results were generated using bulk_extractor, an easy-to-use open source digital forensics tool. Finally, it shows how bulk_extractor was extended to detect data obscured with a simple steganographic technique (XOR 255), and how a subsequent re-analysis of the research corpus found significant use of the technique in commercial software, in malware, and by at least one computer criminal.
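
The XOR-255 technique is simple to illustrate: XOR is its own inverse, so re-applying it to a candidate buffer ("optimistically" guessing the transform) reveals obscured features. A simplified Python sketch of the idea, not bulk_extractor's actual implementation:

```python
import re

EMAIL = re.compile(rb"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def find_emails(buf: bytes):
    """Scan a raw buffer for email addresses, both as-is and after
    undoing XOR-255 obfuscation (XOR with 0xFF is its own inverse)."""
    hits = [(m.group(), "plain") for m in EMAIL.finditer(buf)]
    decoded = bytes(b ^ 0xFF for b in buf)
    hits += [(m.group(), "xor255") for m in EMAIL.finditer(decoded)]
    return hits

obscured = bytes(b ^ 0xFF for b in b"contact: alice@example.com")
print(find_emails(obscured))   # -> [(b'alice@example.com', 'xor255')]
```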

Dr. Simson L. Garfinkel is an Associate Professor at the Naval Postgraduate School. Based in Arlington VA, Garfinkel’s research interests include digital forensics, usable security, data fusion, information policy and terrorism. He holds six US patents for his computer-related research and has published dozens of research articles on security and digital forensics.

Garfinkel is the author or co-author of fourteen books on computing. He is perhaps best known for his book Database Nation: The Death of Privacy in the 21st Century. Garfinkel’s most successful book, Practical UNIX and Internet Security (co-authored with Gene Spafford), has sold more than 250,000 copies and been translated into more than a dozen languages since the first edition was published in 1991.

Garfinkel received three Bachelor of Science degrees from MIT in 1987, a Master’s of Science in Journalism from Columbia University in 1988, and a Ph.D. in Computer Science from MIT in 2005.

Host: Dr. Alan T. Sherman

talk: Problem with Print: publishing born digital scholarship, 4pm 11/25

The Problem with Print: publishing born digital scholarship

Professor Helen J. Burgess
Department of English, UMBC

4:00pm Monday, 25 November 2013

Gallery, A. O. Kuhn Library

Dr. Burgess will discuss some of the difficulties for academics seeking to work and publish outside traditional “print-bound” models of humanities scholarship – including issues of professional evaluation and distribution – and show some examples of “born digital” works that would benefit from a new model of publishing. A reception, sponsored by the Libby Kuhn Endowment Fund, will follow the program.

Helen J. Burgess is an Assistant Professor of English in the Communication and Technology track. Dr. Burgess received her BA (Hons) and MA (Dist.) in English Language and Literature from Victoria University of Wellington, New Zealand, and her PhD in English from West Virginia University. She is active in the new media research community as editor of the online journal Hyperrhiz: New Media Cultures and technical editor of Rhizomes: Cultural Studies in Emerging Knowledge. Dr. Burgess is coauthor of Red Planet: Scientific and Cultural Encounters with Mars and Biofutures: Owning Body Parts and Information, both published in the Mariner10 interactive DVD-ROM series at the University of Pennsylvania Press. Her interests include multimedia and web development, open source and open content production, electronic literature, and science fiction.

PhD defense: Oehler on Private Packet Filtering, 11/21


Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

Private Packet Filtering: Searching for Sensitive Indicators
without Revealing the Indicators in Collaborative Environments

Michael John Oehler

10:30-12:30 Thursday, 21 November 2013, ITE 325

Private Packet Filtering (PPF) is a new capability that preserves the confidentiality of sensitive attack indicators and retrieves network packets that match those indicators without revealing the matching packets. The capability is achieved through the definition of a high-level language, a conjunction operator that expands the breadth of the language, a simulation of the document detection and recovery rates of the output buffer, and a description of applicable system facets. Fundamentally, PPF uses a private search mechanism that in turn relies on the (partially) homomorphic property of the Paillier cryptosystem.

PPF is intended for use in a collaborative environment involving a cyber defender and a partner. The defender has access to a set of sensitive indicators and is willing to share some of those indicators with the partner; the partner has access to network data and is willing to share that access. Neither is willing to provide full access. Using the language, the defender creates an encrypted form of the sensitive indicators and passes the encrypted indicators to the partner. The partner then uses the encrypted indicators to filter packets and returns an encrypted packet capture file. The partner does not decrypt the indicators and cannot identify which packets matched.

The defender decrypts, reassembles the matching packets, gains situational awareness, and notifies the partner of any malicious activity. In this sense, the defender reveals only the observed indicator and retains control of all other indicators. PPF thus allows both parties to gain situational awareness of malicious activity and to retain control without exposing every indicator or all network data.
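
The Paillier property that such private search schemes rely on is additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so matches can be accumulated without decryption. A toy Python illustration of that property alone (tiny primes for readability; PPF's full protocol builds a private-search scheme on top of it):

```python
import math, random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy Paillier keypair: tiny primes for illustration only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = lcm(p - 1, q - 1)
g = n + 1                     # standard choice of generator
mu = pow(lam, -1, n)          # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# Additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2,
# so a party holding only ciphertexts can still accumulate match counts.
c = encrypt(5) * encrypt(7) % n2
assert decrypt(c) == 12
```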

Committee: Dhananjay Phatak (chair), Michael Collins, Josiah Dykstra, Russell Fink, John Pinkston and Alan Sherman

PhD defense: Xianshu Zhu, Finding Story Chains and Creating Story Maps in Newswire Articles

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Finding Story Chains and Creating Story Maps in Newswire Articles

Xianshu Zhu

10:00am-12:00pm, Monday, 25 November 2013, ITE 325B

Huge numbers of news articles about events are published on the Internet every day. The flood of information can easily swamp people, and seems to produce more pain than gain. While excellent search engines such as Google, Yahoo, and Bing help us retrieve information from simple keywords, information overload still makes it hard to understand the evolution of a news story. Conventional search engines display unstructured search results, ranked by relevance using keyword-based methods and other more complicated ranking algorithms. When it comes to searching for a story (a sequence of events), however, none of these algorithms organizes the results by the evolution of the story. Unstructured search results have two main limitations. (1) Lack of the big picture on complex stories. News articles tend to describe a story from different perspectives, and for complex stories users can spend significant time looking through unstructured results without seeing the big picture. For instance, Hurricane Katrina struck New Orleans in late August 2005. By typing “Hurricane Katrina” into Google, people can get much information about the event and its impact on the economy, health, government policies, and so on. However, they may struggle to sort that information into a story chain that tells how, for example, Hurricane Katrina affected government policies. (2) Hidden relationships between two events are hard to find. The connections between news events are sometimes extremely complicated and implicit, and users are unlikely to discover them without thorough investigation of the search results.

In this dissertation, we seek to extend the capability of existing search engines to output coherent story chains and story maps (maps that demonstrate various perspectives on news events), rather than loosely connected pieces of information. By these means, people can obtain a better understanding of a news story, capture its big picture quickly, and discover hidden relationships between news events. First, algorithms for finding story chains have two advantages: (1) they can show how two events are correlated by finding a chain of events that coherently connects them, helping people discover hidden relationships between the two; and (2) they allow users to pose complex queries such as “how is event A related to event B”, which do not work well on conventional keyword-based search engines. Second, creating story maps, by finding different perspectives on a news story and grouping news articles by those perspectives, helps users better capture the big picture of the story and suggests directions they can pursue further. From a functionality point of view, the story map is similar to the table of contents of a book, which gives users a high-level overview of the story and guides them during the reading process.

The specific contributions of this dissertation are: (1) Development of several algorithms for finding story chains, including (a) a random-walk-based story chain algorithm; (b) a co-clustering-based story chain algorithm, which further improves the chains by grouping semantically close words together and propagating the relevance of word nodes to document nodes; and (c) an algorithm that finds story chains by extracting multi-dimensional event profiles from unstructured news articles, which aims to better capture relationships among news events and significantly improves the quality of the story chains. (2) Development of an algorithm for creating story maps that uses Wikipedia as the knowledge base. News articles are represented as bags of aspects instead of bags of words; the bag-of-aspects representation allows users to search news articles through different aspects of a news event rather than through simple keyword matching.
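
To give a flavor of the random-walk machinery (a generic sketch, not the dissertation's exact algorithm), the following runs a random walk with restart over a document graph, scoring each node's relevance to two seed events:

```python
import numpy as np

def rwr_scores(W, seed_nodes, restart=0.15, iters=100):
    """Random walk with restart on a (document/word) graph.
    W[i, j] is the weight of the edge between nodes i and j; the
    restart distribution is concentrated on the two endpoint events."""
    n = W.shape[0]
    col_sums = W.sum(axis=0)
    P = W / np.where(col_sums == 0, 1, col_sums)   # column-stochastic
    r = np.zeros(n)
    r[seed_nodes] = 1.0 / len(seed_nodes)
    s = r.copy()
    for _ in range(iters):
        s = (1 - restart) * P @ s + restart * r    # propagate relevance
    return s                                       # higher = more relevant
```

Documents that score highly with respect to both endpoint events are natural candidates for the intermediate links of a chain.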

Committee: Drs. Tim Oates (chair), Tim Finin, Charles Nicholas, Sergei Nirenburg and Doug Oard

talk: Four Quantum Algorithms, 2:30 Tue 11/19, ITE 325


Computer Science and Electrical Engineering
Quantum Computing Seminar

Four Quantum Algorithms

Samuel Lomonaco, CSEE, UMBC

2:30-3:00 Tuesday, 19 November 2013, ITE 325b

In this lecture, we discuss four quantum algorithms: Deutsch's algorithm, Simon's algorithm, Shor's algorithm, and Grover's algorithm. No prior knowledge of quantum mechanics will be assumed. These talks are based on slides from four invited lectures given at Oxford University, presented here for the UMBC audience.

Samuel J. Lomonaco is a professor in the Department of Computer Science and Electrical Engineering of the University of Maryland, Baltimore County. He is internationally known for his many contributions in mathematics and in computer science. His research interests span a wide range of subjects, from knot theory and algebraic and differential topology to algebraic coding theory, quantum computation, and symbolic computation. In quantum cryptography, he has shown how quantum information theory can be used to gain a better understanding of eavesdropping with quantum entanglement. In quantum computation, he has shown how Lie groups can be used to solve problems arising in the study of quantum entanglement. In 2000 Professor Lomonaco organized the first American Mathematical Society short course on quantum computation.

Organizer: Prof. Samuel Lomonaco

talk: Human Computing Capacity and Future Human Development, Mon 11/18


Center for Hybrid Multicore Productivity Research
Distinguished Computational Science Lecture Series

Human Computing Capacity and Future Human Development

Professor Bezalel Gavish
Information Technology and Operations Management
Southern Methodist University, Dallas TX

2:30pm Monday, 18 November 2013, ITE 325B, UMBC

This talk introduces bounds on the processing capacity of future computers and analyzes the possibilities for realizing that capacity in the long run. The analysis shows the existence of hard limits on progress in processing capacity, which in turn bound future computing capacity, and it suggests that some predictions about future computing capabilities are unlikely ever to be achieved. The bounds stem from fundamental physical limitations, which is what makes them relatively tight. The lecture will discuss several such bounds and why they will be reached much sooner than simple traditional forecasting methods suggest.

Assuming that human computational activities (decision making, processing, vision, control, hearing, and sensing) require energy, these energy-based results yield upper bounds on the computational capacity, in the broadest sense, of human beings. The results are architecture independent, have direct impact on research on models of the brain, and provide bounds on the cognitive abilities of human beings. A byproduct of this line of research is a set of new conjectures about the past and future physical development of the human species.
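
The abstract does not spell out the derivation, but the canonical energy-based limit of this kind is Landauer's principle; assuming room temperature and, say, the human brain's roughly 20 W budget, a back-of-the-envelope bound follows:

```latex
% Landauer's principle: erasing one bit dissipates at least k_B T ln 2.
\[
E_{\min} = k_B T \ln 2
         \approx (1.38\times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9\times 10^{-21}\,\mathrm{J}
\]
% A power budget P then caps the rate of irreversible bit operations:
\[
R_{\max} = \frac{P}{k_B T \ln 2}
         \approx \frac{20\,\mathrm{W}}{2.9\times 10^{-21}\,\mathrm{J}}
         \approx 7\times 10^{21}\ \text{bit operations per second}
\]
```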

Professor Bezalel Gavish holds the Eugene J. and Ruth F. Constantin Distinguished Chair at Southern Methodist University in Dallas, Texas. He was Chairman of the Information Technology and Operations Management department at the Cox School of Business. Professor Gavish is the founding Chairman of the International Conference on Telecommunications Systems Management and the International Conference on Networking and Electronic Commerce. He is Editor-in-Chief of two top-ranked research journals, the Telecommunication Systems Journal and the Electronic Commerce Research Journal; serves as an editorial board member of Wireless Networks, Networks, and Annals of Information Systems; was Telecommunications Departmental Editor for the journal Operations Research and Department Editor for Distributed Systems of the ORSA Journal on Computing; and serves on the editorial boards of Computers and Operations Research, Annals of Mathematics of Artificial Intelligence, INFOR, Mathematics of Industrial Systems, Combinatorial Optimization: Theory and Practice, and Pesquisa Operacional. Professor Gavish has published over 100 papers in his areas of expertise. He received his Ph.D. (1975) in operations research from the Technion, Israel Institute of Technology.

talk: Ophir Frieder on Collective Intelligence, Noon Wed 11/13


UMBC Information Systems Department
Distinguished Lecture for the Fall

Collective Intelligence

Dr. Ophir Frieder

Georgetown University

12:00-1:00pm Wednesday, 13 November 2013, ITE 456

Collective intelligence is group intelligence generated by the collaboration of many individuals. However, such intelligence is only as powerful as one's ability to digest it. I will describe two recent efforts, the first focusing on early disease detection using microblogs and the second on collaborative tag labeling. Time permitting, I will also describe an older effort that effectively integrates information, and comment on its potential for the future.

Ophir Frieder holds the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing and is Chair of the Department of Computer Science at Georgetown University. He is also Professor of Biostatistics, Bioinformatics and Biomathematics in the Georgetown University Medical Center. He is a Fellow of the AAAS, ACM, and IEEE.

Talk: Niloy Ganguly on Topical Search in Twitter, 1pm Tue 11/5, ITE 459


UMBC Information Systems Department

Topical Search in Twitter

Dr. Niloy Ganguly

Indian Institute of Technology Kharagpur

1:00pm Tuesday, 5 November 2013, ITE 459

Twitter is now a popular platform for discovering real-time news on various topics. We are developing methodologies to improve topical search in Twitter, specifically search for topical experts and popular content on specific topics. Utilizing social annotations provided by the Twitter population through the Lists feature, we have developed the following:

  • A novel who-is-who system for Twitter, which gives the topical attributes of a specified user. The list-based methodology gives accurate and comprehensive topical attributes for millions of popular Twitter users.
  • A search system for topical experts in Twitter. Comparison of our system with the expert search service offered by Twitter shows that the list-based method provides better results for a large number of topical queries.
  • A novel topical search system which, given a topic, identifies and clusters the content (tweets, hashtags) being discussed by the community of experts on that topic. Our methodology gives relevant and trustworthy content for a wide range of topics. To the best of our knowledge, this is the first systematic attempt to utilize social annotations to provide topical search in Twitter.
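
To give a concrete flavor of the Lists-based idea (a simplified, hypothetical sketch, not the deployed system), one can aggregate the words that list creators use when they place a user on a list and treat the most frequent ones as the user's topical attributes:

```python
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "my", "list", "people"}

def topical_attributes(list_names, top_k=5):
    """Infer a user's topics from the names of the Twitter Lists that
    include the user: crowdsourced labels applied by many list creators."""
    words = Counter()
    for name in list_names:
        for w in re.findall(r"[a-z]+", name.lower()):
            if w not in STOPWORDS:
                words[w] += 1
    return words.most_common(top_k)

# e.g., hypothetical lists containing one account:
print(topical_attributes(["Space stuff", "science and space", "astronomy"]))
# -> [('space', 2), ('stuff', 1), ('science', 1), ('astronomy', 1)]
```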

Niloy Ganguly is an associate professor in the Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur. He received his PhD from Bengal Engineering and Science University, Calcutta, India, and his Bachelors in Computer Science and Engineering from IIT Kharagpur, and has been a postdoctoral fellow at the Technical University of Dresden, Germany. He investigates several different aspects of online social networks, including recommendation systems based on community structures in web social networks such as Twitter and Delicious. He has also worked on theoretical issues in large dynamical networks, often termed complex networks; specifically, he has studied problems related to percolation, the evolution of networks, and the flow of information over these networks. He has collaborated with various national and international universities and research labs, including Duke University, TU Dresden, MPI PKS and MPI SWS in Germany, and Microsoft Research India. He publishes in top international journals and conferences, including CCS, PODC, ICDM, ACL, WWW, INFOCOM, SIGIR, Europhysics Letters, Physical Review E, and ACM and IEEE Transactions.
