talk: Problem with Print: publishing born digital scholarship, 4pm 11/25

The Problem with Print: publishing born digital scholarship

Professor Helen J. Burgess
Department of English, UMBC

4:00pm Monday, 25 November 2013

Gallery, A. O. Kuhn Library

Dr. Burgess will discuss some of the difficulties for academics seeking to work and publish outside traditional “print-bound” models of humanities scholarship – including issues of professional evaluation and distribution – and show some examples of “born digital” works that would benefit from a new model of publishing. A reception, sponsored by the Libby Kuhn Endowment Fund, will follow the program.

Helen J. Burgess is an Assistant Professor of English in the Communication and Technology track. Dr. Burgess received her BA (Hons) and MA (Dist.) in English Language and Literature from Victoria University of Wellington, in New Zealand, and her PhD in English from West Virginia University. She is active in the new media research community as editor of the online journal Hyperrhiz: New Media Cultures, and technical editor of Rhizomes: Cultural Studies in Emerging Knowledge. Dr. Burgess is coauthor of Red Planet: Scientific and Cultural Encounters with Mars and Biofutures: Owning Body Parts and Information, both titles published in the Mariner10 interactive DVD-ROM series at the University of Pennsylvania Press. She has interests in multimedia and web development, open source and open content production, electronic literature, and science fiction.

PhD defense: Leschke on Visualization for Digital Forensic Data

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Applying Data Visualization Techniques to Support the
Analysis of Digital Forensic Data

Timothy Leschke

10:00am-Noon Friday 22 November 2013, ITE 456

The Modern Age of digital forensics is characterized by a proliferation of artifacts, increased data complexity, larger and cheaper data storage, and the emergence of the need for tools that support timeline analysis, anomaly detection, and triage. Traditional text-based digital forensic tools can no longer keep pace with the demands of the modern digital forensic examiner. A new approach for developing digital forensic tools is required if digital forensics is going to avoid becoming stagnant.

We apply the power of data visualization to support the needs of the modern digital forensic examiner. We design and develop a tool called Change-Link, a coordinated multiple-view tool that uses semantic zooming – in the form of an overview, a treeview, a directory content view, and a metadata view – to provide an understanding of digital forensic data that changes over time. By using this tool to examine a mock evidence hard drive containing shadow volume data provided by the Microsoft Volume Shadow Copy Service, we demonstrate a way to reduce data complexity and provide better forensic data analysis while supporting timeline analysis, anomaly detection, and triage of the dataset.

We demonstrate support for our broader hypothesis: that data visualization techniques can be developed to support better analysis of digital forensic data.

Committee: Drs. Charles Nicholas (chair), Konstantinos Kalpakis, Dhananjay Phatak, Jian Chen, Clay Shields (Georgetown Univ.), Daniel Quist

PhD defense: Oehler on Private Packet Filtering, 11/21


Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Ph.D. Dissertation Defense

Private Packet Filtering: Searching for Sensitive Indicators
without Revealing the Indicators in Collaborative Environments

Michael John Oehler

10:30-12:30 Thursday, 21 November 2013, ITE 325

Private Packet Filtering (PPF) is a new capability that preserves the confidentiality of sensitive attack indicators, and retrieves network packets that match those indicators without revealing the matching packets. The capability is achieved through the definition of a high-level language, the definition of a conjunction operator that expands the breadth of the language, a simulation of the document detection and recovery rates of the output buffer, and through a description of applicable system facets. Fundamentally, PPF uses a private search mechanism that in turn relies on the (partial) homomorphic property of the Paillier cryptosystem. PPF is intended for use in a collaborative environment involving a cyber defender and a partner: The defender has access to a set of sensitive indicators, and is willing to share some of those indicators with the partner. The partner has access to network data, and is willing to share that access. Neither is willing to provide full access. Using the language, the defender creates an encrypted form of the sensitive indicators, and passes the encrypted indicators to the partner. The partner then uses the encrypted indicators to filter packets, and returns an encrypted packet capture file. The partner does not decrypt the indicators and cannot identify which packets matched. The defender decrypts, reassembles the matching packets, gains situational awareness, and notifies the partner of any malicious activity. In this sense, the defender reveals only the observed indicator and retains control of all other indicators. PPF allows both parties to gain situational awareness of malicious activity, and to retain control without exposing every indicator or all network data.
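The additive homomorphism of the Paillier cryptosystem, which the abstract says PPF relies on, can be sketched in a few lines. The toy implementation below (deliberately tiny primes, illustrative only, and not taken from the dissertation) shows the key property: a product of ciphertexts decrypts to the sum of the plaintexts, so a party can combine encrypted values without ever decrypting them.

```python
# Toy Paillier cryptosystem demonstrating the additive homomorphism.
# Requires Python 3.8+ for pow(x, -1, n) modular inverses.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 1009, 1013          # toy primes; real deployments use 1024+ bit primes
n = p * q
n2 = n * n
g = n + 1                  # standard generator choice
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)       # modular inverse of lambda mod n

def encrypt(m):
    """E(m) = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(7), encrypt(35)
assert decrypt((c1 * c2) % n2) == 42   # E(a)*E(b) decrypts to a+b
```

In a private search built on this property, matching data can be folded into an encrypted output buffer that only the key holder – here, the defender – can decrypt.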

Committee: Dhananjay Phatak (chair), Michael Collins, Josiah Dykstra, Russell Fink, John Pinkston and Alan Sherman

PhD defense: Xianshu Zhu, Finding Story Chains and Creating Story Maps in Newswire Articles

Ph.D. Dissertation Defense
Computer Science and Electrical Engineering
University of Maryland, Baltimore County

Finding Story Chains and Creating Story Maps in Newswire Articles

Xianshu Zhu

10:00-12:00pm Monday 25 November 2013, ITE 325B

Huge numbers of news articles about events are published on the Internet every day. The flood of information can easily swamp people, and seems to produce more pain than gain. While there are some excellent search engines, such as Google, Yahoo and Bing, that help us retrieve information from simple keyword queries, the problem of information overload makes it hard to understand the evolution of a news story. Conventional search engines display unstructured search results, which are ranked by relevance using keyword-based ranking methods and other more complicated ranking algorithms. However, when it comes to searching for a story (a sequence of events), none of these ranking algorithms can organize the search results by the evolution of the story. Limitations of unstructured search results include: (1) Lack of the big picture on complex stories. In general, news articles tend to describe a news story from different perspectives. For complex news stories, users can spend significant time looking through unstructured search results without being able to see the big picture of the story. For instance, Hurricane Katrina struck New Orleans on August 29, 2005. By typing “Hurricane Katrina” into Google, people can get much information about the event and its impact on the economy, health, government policies, etc. However, they may struggle to sort that information into a story chain that tells how, for example, Hurricane Katrina affected government policies. (2) Difficulty finding hidden relationships between two events. The connections between news events are sometimes extremely complicated and implicit, and it is hard for users to discover them without a thorough investigation of the search results.

In this dissertation, we seek to extend the capability of existing search engines to output coherent story chains and story maps (maps that demonstrate various perspectives on news events), rather than loosely connected pieces of information. By this means, people can obtain a better understanding of a news story, capture its big picture quickly, and discover hidden relationships between news events. First, algorithms for finding story chains have two advantages: (1) they can show how two events are correlated by finding a chain of events that coherently connects them, helping people discover hidden relationships between the two events; and (2) they allow users to search with complex queries such as “how is event A related to event B,” which do not work well on conventional keyword-based search engines. Second, creating story maps by finding different perspectives on a news story and grouping news articles by those perspectives can help users better capture the big picture of the story and suggest directions they can pursue further. From a functionality point of view, a story map is similar to the table of contents of a book, which gives users a high-level overview of the story and guides them during the news reading process.

The specific contributions of this dissertation are: (1) developing algorithms to find story chains, including (a) a random walk based story chain algorithm; (b) a co-clustering based story chain algorithm, which further improves the story chains by grouping semantically close words together and propagating the relevance of word nodes to document nodes; and (c) finding story chains by extracting multi-dimensional event profiles from unstructured news articles, which aims to better capture relationships among news events and significantly improves the quality of the story chains; and (2) developing an algorithm to create story maps that uses Wikipedia as the knowledge base. News articles are represented as bags of aspects instead of bags of words. The bag-of-aspects representation allows users to search news articles through different aspects of a news event rather than through simple keyword matching.
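To give a rough flavor of the random-walk idea, here is a minimal sketch: score candidate documents for a chain between two events by running a random walk with restart from each endpoint over a word-overlap graph. The five-document corpus, the Jaccard-weighted graph, and the endpoint-product scoring are all invented for this illustration and are far simpler than the algorithms developed in the dissertation.

```python
# Hypothetical mini-corpus; real story-chain algorithms operate on
# full news articles, not hand-built word sets.
docs = {
    "A": {"hurricane", "katrina", "landfall", "orleans"},
    "B": {"katrina", "orleans", "flooding", "levee"},
    "C": {"flooding", "levee", "fema", "response"},
    "D": {"fema", "response", "policy", "reform"},
    "E": {"sports", "playoffs", "score"},          # unrelated article
}
nodes = list(docs)

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Document graph weighted by word overlap.
w = {(u, v): jaccard(docs[u], docs[v])
     for u in nodes for v in nodes if u != v}

def walk_scores(seed, alpha=0.15, iters=100):
    """Random walk with restart (personalized PageRank) from one event."""
    r = {u: 1.0 if u == seed else 0.0 for u in nodes}
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        dangling = 0.0
        for u in nodes:
            out = sum(w[(u, v)] for v in nodes if v != u)
            if out == 0:                     # no neighbors: restart
                dangling += r[u]
                continue
            for v in nodes:
                if v != u:
                    nxt[v] += (1 - alpha) * r[u] * w[(u, v)] / out
        nxt[seed] += alpha + (1 - alpha) * dangling   # restart mass
        r = nxt
    return r

# Documents reachable from BOTH endpoints are chain candidates.
fwd, bwd = walk_scores("A"), walk_scores("D")
score = {u: fwd[u] * bwd[u] for u in nodes}
chain = sorted(nodes, key=score.get, reverse=True)
```

The unrelated article "E" shares no words with the others, so it receives zero score from either walk and drops out of the candidate chain between events A and D.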

Committee: Drs. Tim Oates (chair), Tim Finin, Charles Nicholas, Sergei Nirenburg and Doug Oard

Council of Computing Majors to meet 12-12:45 Wed, Nov 20 in BIO LH1


The UMBC Council of Computing Majors (CCM) will meet from Noon to 12:45pm on Wednesday, November 20 in BIO 101 (Lecture Hall 1 in the building behind the Biology Building). The CCM is a student organization representing undergraduate computer science and computer engineering majors and anyone else with an interest in computing. Everyone is welcome.

At this meeting, students from Professor Nilanjan Banerjee’s Mobile, Pervasive and Sensor Systems Lab (MPSSL) will talk about their research. Their lab currently focuses on application areas that include renewable energy, healthcare applications and mobile phone systems as well as theoretical work on network topology compression and analytical modeling of hybrid mobile networks.

2014 Global Game Jam at UMBC, Jan 24-26

For the 6th year in a row, UMBC is the Baltimore host site for the Global Game Jam, which will be held from 5:00pm Friday January 24 to 5:00pm Sunday January 26. Many Global Game Jam sites charge admission. Thanks to the generous support of Next Century, the UMBC site will again be free, meals included. So we can plan appropriately, we do require advance registration.

In a game jam, participants come together to make a video game. Each participant joins a small team at the jam, and over a couple of days creates a new, unique and creative video game according to the rules of the jam. Game jams are a great way to meet other developers, beef up your resume, or just learn what it takes to make a game. Teams need designers who can come up with a creative game idea within the jam constraints, as well as artists, programmers and testers, so there is something for participants at all levels of experience to do.

The Global Game Jam takes place in the same 48 hours all over the world! Last year there were more than 300 host sites around the world. All participants will be constrained by the same theme and set of rules. After the theme is announced, participants will have the chance to brainstorm game ideas and pitch them to other participants to form development teams. After a couple of mad days of game development, all the games are demoed and submitted to the Global Game Jam site.

Even if you don’t participate, you can track the action on twitter by following @globalgamejam and monitoring #ggj14 and #umbcggj, and try out the game submissions after the event is over.

talk: Four Quantum Algorithms, 2:30 Tue 11/19, ITE 325


Computer Science and Electrical Engineering
Quantum Computing Seminar

Four Quantum Algorithms

Samuel Lomonaco, CSEE, UMBC

2:30-3:00 Tuesday, 19 November 2013, ITE 325b

In this lecture, we discuss four quantum algorithms: Deutsch’s algorithm, Simon’s algorithm, Shor’s algorithm, and Grover’s algorithm. No prior knowledge of quantum mechanics will be assumed. These talks are based on the slides of four invited lectures given at Oxford University.
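For a flavor of the simplest of the four, here is a small state-vector simulation of Deutsch's algorithm (a generic textbook sketch, not taken from the lecture slides): one oracle query decides whether a one-bit function is constant or balanced, where classically two evaluations are needed.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """U_f|x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1]).astype(float)  # start in |0>|1>
    state = np.kron(H, H) @ state                  # Hadamard both qubits
    state = oracle(f) @ state                      # single oracle query
    state = np.kron(H, I) @ state                  # Hadamard first qubit
    p1 = state[2] ** 2 + state[3] ** 2             # P(first qubit = 1)
    return "balanced" if p1 > 0.5 else "constant"

assert deutsch(lambda x: 0) == "constant"
assert deutsch(lambda x: x) == "balanced"
```

The trick is phase kickback: with the second qubit in (|0> − |1>)/√2, the oracle imprints (−1)^f(x) on the first qubit, and the final Hadamard converts that phase pattern into a measurable bit.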

Samuel J. Lomonaco is a professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County. He is internationally known for his many contributions in mathematics and in computer science. His research interests span a wide range of subjects, from knot theory and algebraic and differential topology to algebraic coding theory, quantum computation, and symbolic computation. In quantum cryptography, he has shown how quantum information theory can be used to gain a better understanding of eavesdropping with quantum entanglement. In quantum computation, he has shown how Lie groups can be used to solve problems arising in the study of quantum entanglement. In 2000 Professor Lomonaco organized the first American Mathematical Society short course on quantum computation.

Organizer: Prof. Samuel Lomonaco

Rick Forno speaking at Cyber Education Symposium

CSEE’s Dr. Richard Forno, Cybersecurity GPD and Assistant Director of UMBC’s Center for Cybersecurity, is speaking at the NCSI Cybersecurity Education Symposium in Arlington, Virginia on Tuesday, November 19th. The panel session, “Creating the Cyber Curricula for Success” will explore the interdisciplinary nature of cybersecurity and cybersecurity education at the undergraduate and graduate levels and discuss the right balance of disciplines, knowledge, skills, and attributes required to prepare future cybersecurity practitioners.

Also speaking later that day are Ellen Hemmerly (Executive Director and President, UMBC Research Park Corporation) and Kent Malwitz (President, UMBC Training Centers) on a panel session entitled “The Business Model for a Cybersecurity Program.”

talk: Human Computing Capacity and Future Human Development, Mon 11/18


Center for Hybrid Multicore Productivity Research
Distinguished Computational Science Lecture Series

Human Computing Capacity and Future Human Development

Professor Bezalel Gavish
Information Technology and Operations Management
Southern Methodist University, Dallas TX

2:30pm Monday, 18 November 2013, ITE 325B, UMBC

This talk introduces bounds on future computers’ processing capacity and analyzes the possibilities for their realization in the long run. The analysis shows the existence of hard limits on progress in processing capacity, which in turn bound future computing capacity. The results show that it is unlikely that some of the predictions on future computing capabilities will ever be achieved. The capacity bounds stem from fundamental physical limitations, which make the bounds relatively tight. The bounds developed here will be reached much sooner than simple, traditional forecasting methods suggest, as will be discussed in the lecture.

Assuming that computational activities like decision making, processing, vision, control, auditory and sensing activities of human beings require energy, the above energy based results generate upper bounds on the computational capacity (in the broadest sense) of human beings. The results are architecture independent and have direct impact on research on models of the brain and provide bounds on the cognitive abilities of human beings. A byproduct of this line of research is providing some new conjectures on the past and future physical development of the human species.
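The abstract does not say which physical limits the talk uses, but one well-known energy-based bound of this kind is Landauer's principle: erasing one bit of information dissipates at least kT·ln 2 of energy. The back-of-the-envelope calculation below illustrates the style of argument; the ~20 W brain power budget is a commonly cited estimate, not a figure from the talk.

```python
# Illustrative Landauer-limit calculation, not taken from the talk.
import math

k = 1.380649e-23     # Boltzmann constant, J/K
T = 310.0            # approximate body temperature, K
power = 20.0         # commonly cited estimate of brain power budget, W

e_bit = k * T * math.log(2)   # minimum energy to erase one bit, J
bound = power / e_bit         # upper bound on irreversible bit ops per second

print(f"kT ln 2 = {e_bit:.2e} J; at most {bound:.2e} bit erasures per second")
```

Any architecture-independent bound of this form constrains every physical computer, biological or electronic, which is why such arguments can be applied to the cognitive capacity of human beings as well as to future machines.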

Professor Bezalel Gavish holds the Eugene J. and Ruth F. Constantin Distinguished Chair at Southern Methodist University in Dallas, Texas. He was the Chairman of the Information Technology and Operations Management department at the Cox School of Business. Professor Gavish is the founding Chairman of the International Conference on Telecommunications Systems Management and the International Conference on Networking and Electronic Commerce. He is the Editor-in-Chief of two top-ranked, research-oriented journals, the Telecommunication Systems Journal and the Electronic Commerce Research Journal; serves as an editorial board member of Wireless Networks, Networks, and Annals of Information Systems; was Telecommunications Departmental Editor for the Operations Research journal and Department Editor for Distributed Systems of the ORSA Journal on Computing; and serves on the editorial boards of Computers and Operations Research, Annals of Mathematics of Artificial Intelligence, INFOR, Mathematics of Industrial Systems, Combinatorial Optimization: Theory and Practice, and Pesquisa Operacional. Prof. Gavish has published over 100 papers in his areas of expertise. He received his Ph.D. (1975) in operations research from the Technion, Israel Institute of Technology.

talk: Ophir Frieder on Collective Intelligence, Noon Wed 11/13


UMBC Information Systems Department
Distinguished Lecture for the Fall

Collective Intelligence

Dr. Ophir Frieder

Georgetown University

12:00-1:00pm Wednesday, 13 November 2013, ITE 456

Collective Intelligence is group intelligence generated by the collaboration of many individuals. However, such intelligence is only as powerful as one’s ability to digest it. Thus, after describing two recent efforts – the first focusing on early disease detection using microblogs and the second on collaborative tag labeling – I will potentially also describe an older effort that effectively integrates information, and comment on its potential for the future.

Ophir Frieder holds the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair in Computer Science and Information Processing and is Chair of the Department of Computer Science at Georgetown University. He is also Professor of Biostatistics, Bioinformatics and Biomathematics in the Georgetown University Medical Center. He is a Fellow of the AAAS, ACM, and IEEE.
