Commodity hardware description languages (HDLs) like VHDL and Verilog present a challenge from a high assurance point of view because they lack formalized semantics: when a team of hardware engineers produces a circuit design in a commodity HDL and claims that it correctly implements a pseudocode algorithm, on what basis can that claim be evaluated? A formalized model of the circuit design may be painstakingly created (e.g., in the logic of a theorem prover), but how are the accuracy and faithfulness of that model then established? The distance between the widely adopted commodity HDLs and formal models of hardware has been a well-recognized and persistent impediment to driving formal methods into hardware development.
This talk presents a technique developed at Two Six Technologies, called model validation, that formally connects a hardware design and its formal model via a functional, high-level synthesis language called ReWire. Model validation introduces a “model” program to bridge the gap between the hardware design and the algorithm by establishing 1) the equivalence of the algorithm to the model and 2) the equivalence of the model to the circuit design. Equivalence between the algorithm and the ReWire model is verified with a ReWire semantics formalized in Isabelle. Equivalence between the ReWire model and the circuit design is established by producing binary circuits from each (using commodity synthesis tools and the ReWire compiler rwc) and then applying an automated binary equivalence checker.
This talk describes our experience applying model validation as part of the DARPA Data Protection in Virtual Environments (DPRIVE) program. DPRIVE aims to develop a novel hardware accelerator to ease the computational challenges that have prevented widespread use of fully homomorphic encryption (FHE), a line of work that began with Gentry’s discovery and was improved upon algorithmically in the PROCEED program; DPRIVE’s purpose is to build on those algorithmic gains with dedicated hardware. Model validation through ReWire moves formal methods into the practical world, empowering hardware designers to reason about the correctness, safety, and security properties of their designs. In addition, we expect our pipeline to protect hardware supply chains by allowing for a full formal analysis of RTL implementations before tape-out.
Ian Blumenfeld is the Research Director for Mathematics at Two Six Technologies. In that role, he is a principal investigator on multiple DARPA programs, spanning the areas of formal methods, modern cryptography, and applied category theory. Prior to his work at Two Six, Ian was a formal verification engineer at Apple, where he verified cryptographic properties of the iPhone secure enclave processor. Ian has worked in roles in and around the federal research space for more than a decade, including five years as an applied research mathematician at the National Security Agency. Ian is currently enrolled as a part-time Ph.D. student in the UMBC computer science department, working with Dr. Alan Sherman and Dr. Don Engel. Email:
Host: Alan T. Sherman, . Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly Fridays 12-1pm. All meetings are open to the public. Upcoming CDL Meeting: May 13, Enka Blanchard (Digitrust Loria, France)
In this talk, we study a new approach to accelerating distributed machine learning. The system comprises multiple agents, each with a set of local data points and an associated local cost function. The agents are connected to a server, and there is no inter-agent communication. The agents’ goal is to learn a parameter vector that optimizes the aggregate of their local costs without revealing their local data points. We propose an iterative preconditioning technique to mitigate the deleterious effects of the cost function’s conditioning on the convergence rate of distributed gradient descent. Unlike conventional preconditioning techniques, the preconditioner matrix in our proposed technique is updated iteratively to facilitate implementation on the distributed network. In the particular case when the minimizer of the aggregate cost is unique, our algorithm converges superlinearly. We demonstrate our algorithm’s superior performance in machine learning, distributed estimation, and beamforming problems, showing its efficiency for distributively solving nonconvex optimization problems.
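The iterative-preconditioning idea can be pictured on a toy quadratic problem. The sketch below is a minimal illustration under assumed details (least-squares local costs, a constant aggregate Hessian, and step sizes chosen here for convenience); it is not the exact algorithm presented in the talk. The server holds the iterate and a preconditioner matrix K that is itself updated iteratively toward the inverse of the (regularized) aggregate Hessian, rather than being computed in one shot:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 4                      # parameter dimension, number of agents
# Hypothetical local data: agent i holds (A_i, b_i) with cost_i(x) = ||A_i x - b_i||^2
A = [rng.normal(size=(20, d)) for _ in range(m)]
x_true = rng.normal(size=d)
b = [Ai @ x_true for Ai in A]

# Server state: the current iterate and the iteratively updated preconditioner
x = np.zeros(d)
K = np.zeros((d, d))
beta = 0.1                       # small regularizer keeping the preconditioner well-defined
# For this quadratic cost the aggregate Hessian is constant; in the distributed
# setting each agent would report its local Hessian 2 A_i^T A_i to the server.
H = sum(2 * Ai.T @ Ai for Ai in A)
M = H + beta * np.eye(d)
alpha = 1.0 / np.linalg.eigvalsh(M).max()   # step size for the preconditioner update

for t in range(500):
    # Each agent sends only its local gradient; raw data points stay local
    g = sum(2 * Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b))
    # Iterative preconditioner update: K converges to (H + beta I)^{-1}
    K = K - alpha * (M @ K - np.eye(d))
    x = x - K @ g                # preconditioned gradient step

assert np.linalg.norm(x - x_true) < 1e-6
```

Starting K at zero keeps every preconditioned step a contraction, and once K approaches the inverse Hessian the iteration converges far faster than plain gradient descent would on this ill-conditioned aggregate cost.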
Dr. Nikhil Chopra is a Professor in the Department of Mechanical Engineering at the University of Maryland, College Park. He received a Bachelor of Technology (Honors) degree in Mechanical Engineering from the Indian Institute of Technology, Kharagpur, India, in 2001, an M.S. degree in General Engineering in 2003, and a Ph.D. degree in Systems and Entrepreneurial Engineering in 2006 from the University of Illinois at Urbana-Champaign. His current research interests are in the areas of nonlinear control, robotics, and machine learning. He is the co-author of the book Passivity-Based Control and Estimation in Networked Robotics. He is currently an Associate Editor of Automatica and was previously an Associate Editor of IEEE Transactions on Control of Network Systems and IEEE Transactions on Automatic Control.
Multiphoton quantum interference underpins fundamental tests of quantum mechanics and quantum technologies, including applications in quantum computing, quantum sensing, and secure communication. In this talk, we will focus on the development of quantum networks based on multiphoton interference, with applications impacting the security and health of society.
We have shown both theoretically and experimentally how multiphoton interference with states of single or multiple photons enables the characterization of multiphoton networks and states, the generation of multipartite entanglement, and the scaling up of boson-sampling demonstrations of quantum computational speed-up, with potential applications in quantum simulation and secure communication. This is possible even with non-identical photons by harnessing the full multiphoton quantum information stored in the photonic spectra through measurements able to resolve the inner photonic modes, such as frequency and time.
In particular, frequency- and time-resolved measurements allow a precision enhancement in the estimation of time delays or frequency differences between photons. Applications range from the characterization of two-dimensional nanomaterials to the analysis of biological samples, including DNA and cell membranes.
We will also show how multiphoton interference of squeezed photons with homodyne measurements can be used to achieve Heisenberg-limited precision in the estimation of a single unknown parameter, or of a function of unknown parameters distributed in an arbitrary linear network, without the need for adaptive networks or any restrictions on the values of the parameters. Furthermore, we will describe the robustness to losses of the Heisenberg-limited estimation of a linear superposition of unknown phases using a single squeezed-vacuum source and an anti-squeezing operation at a single interferometer output. This work can impact the development of new sensors for the estimation of both local and distributed parameters, including temperature and electromagnetic and gravitational fields.
With developments in quantum computers and algorithms, the public-key systems that we rely upon for secure network communication will become vulnerable to exploitation. Quantum-resistant key exchange protocols are needed to replace our existing vulnerable protocols. Much of the work has focused on developing new mathematical problems that are conjectured to be quantum-resistant as replacements for our current public-key algorithms. We took a different approach, looking to an old secret-key agreement protocol developed by Leighton and Micali at MIT for the Clipper Chip symmetric encryption system. We will present our analysis of the Leighton-Micali key agreement protocol, weaknesses we uncovered with the Cryptographic Protocol Shapes Analyzer (CPSA), and verification of a new protocol based on their ideas that corrects deficiencies in the original protocol.
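For flavor, the hash-based pairwise key agreement at the heart of the Leighton–Micali approach can be sketched as follows. The party names, MAC choice, and exact message structure below are illustrative assumptions; this does not reproduce the analyzed protocol, the weaknesses found with CPSA, or the corrected protocol. The idea is that a trusted server with a master key derives a per-party secret for each node, and publishes a public "pair value" that lets the second party recover the pairwise key from its own secret alone:

```python
import hashlib
import hmac
import secrets

def mac(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA256 stands in for whatever MAC the real protocol uses
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

master = secrets.token_bytes(32)           # held only by the trusted server
k_a = mac(master, b"alice")                # per-party secrets, distributed once
k_b = mac(master, b"bob")

# Pairwise key as Alice computes it from her own secret
k_ab = mac(k_a, b"bob")
# Public pair value the server publishes so Bob can recover the same key
p_ab = xor(k_ab, mac(k_b, b"alice"))
# Bob recovers k_ab using only his own secret and the public value
recovered = xor(p_ab, mac(k_b, b"alice"))
assert recovered == k_ab
```

Because only hash/MAC computations are involved, schemes of this shape are attractive as quantum-resistant replacements for public-key exchanges, which is what motivated revisiting the Leighton–Micali construction.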
Dr. Zieglar is an expert in protocol analysis and computer security at the National Security Agency. He is an adjunct faculty member at UMBC and a member of the UMBC Protocol Analysis Lab. Dr. Zieglar earned his Ph.D. in computer science from UMBC working under Dr. Sidhu. Email:
Host: Alan T. Sherman, . Upcoming CDL Meetings: April 29, Ian Blumenfeld (UMBC); May 13, Enka Blanchard (Digitrust Loria, France).
Proposed by Burgers, Verdult, and Eekelen in 2013, the Session Binding Proxy (SBP) protocol intends to prevent session hijacking by binding the application session to the underlying network session (i.e., binding the session token to the SSL/TLS shared key). We present a formal methods analysis of SBP using the Cryptographic Protocol Shapes Analyzer (CPSA). Our analysis reveals that SBP relies critically on the successful establishment of a secure SSL/TLS channel, which can be undermined using well-known attacks. Also, we find that SBP allows for the partial hijacking of a session using a tailgating attack. In this attack, the adversary uses the server to inject and execute malicious code inside the client’s browser to extract the session token and forge a valid state-changing request to the server. This attack is not neutralized by SBP because the request contains a valid session token and is sent over the client’s existing SSL/TLS channel.
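One way to picture the binding SBP performs is a keyed hash tying the application session token to the TLS session secret. The sketch below is a toy illustration (the function names are hypothetical, and HMAC-SHA256 stands in for the protocol's actual construction); it shows why a token replayed over a different TLS session fails validation:

```python
import hashlib
import hmac
import secrets

def bind_token(tls_shared_key: bytes, session_token: bytes) -> bytes:
    # Bind the application-layer token to the network-layer session:
    # the proxy validates HMAC(tls_key, token) rather than the bare token.
    return hmac.new(tls_shared_key, session_token, hashlib.sha256).digest()

tls_key = secrets.token_bytes(32)      # shared key of the client's TLS session
token = secrets.token_bytes(16)        # application session token
bound = bind_token(tls_key, token)

# The legitimate client, on its own TLS session, passes validation
assert hmac.compare_digest(bound, bind_token(tls_key, token))

# A hijacker who stole the token but speaks over a *different* TLS session fails
attacker_tls_key = secrets.token_bytes(32)
assert not hmac.compare_digest(bound, bind_token(attacker_tls_key, token))
```

Note that the tailgating attack described above defeats even a correct binding of this kind, because the forged request is sent over the client's own TLS channel and therefore carries a validly bound token.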
Host: Alan T. Sherman, . Upcoming CDL Meetings: April 15, Edward Zieglar (NSA); April 29, Ian Blumenfeld (UMBC); May 13, Enka Blanchard (Digitrust Loria, France).
Traditionally, most votes are lost to registration problems, ballot design problems, and polling-place operations that cause long lines, as well as to possible mail-in ballot fraud. This talk will describe voting-process and technological problems, along with some possible solutions for improving voting. Voter disenfranchisement is much higher for people with perceptual, physical, and cognitive disabilities: more than 14% of registered voters are in danger of increased errors due to dyslexia, and more than 6.5% due to short-term memory problems. We will describe and demonstrate technologies we are building that help disabled people and that improve voting universally as well.
Dr. Ted Selker is an entrepreneur and inventor who also mentors innovation. Ted spent five years as director of Considerate Systems research at Carnegie Mellon University Silicon Valley, where he also helped develop the campus’s research mission. Prior to that, Ted spent ten years as an associate professor at the MIT Media Laboratory, where he created the Context Aware Computing group, co-directed the Caltech/MIT Voting Technology Project, and directed the Industrial Design Intelligence project on the future of product design. Before that, his successes at targeted product creation and enhancement at IBM earned him the role of IBM Fellow. Ted’s work birthed successful products ranging from notebook computers to operating systems; it has accumulated numerous awards, patents, and papers and has often been featured in the press. Ted is a co-recipient of the Computer Science Policy Leader Award of the Scientific American 50, the American Association for People with Disabilities Thomas Paine Award for his work on voting technology, and the Telluride Tech Fest Award.
Host: Alan T. Sherman, . Upcoming CDL Meetings: March 18, Nilanjan Banerjee (UMBC); April 1, Kirellos Elsaad (UMBC); April 15, Edward Zieglar (NSA); April 29, Ian Blumenfeld (UMBC); May 13, Enka Blanchard (Digitrust Loria, France)
As autonomous systems are fielded in unknown, dynamic, potentially contested conditions, they will need to operate with partial, uncertain information. Successful long-term deployments will need agents to reason about their energy logistics and will require careful coordination between robots with vastly different energetics (e.g., air and ground platforms), which is especially challenging in the face of uncertainty. To make matters more complicated, communication between the agents may not always be available. In this talk, I will present our ongoing ArtIAMAS work on risk-aware route planning and coordination algorithms that can reason about uncertainty in a provable fashion to enable long-term autonomous deployments.
Dr. Pratap Tokekar is an Assistant Professor in the Department of Computer Science and UMIACS at the University of Maryland. Between 2015 and 2019, he was an Assistant Professor in the Department of Electrical and Computer Engineering at Virginia Tech. Previously, he was a Postdoctoral Researcher at the GRASP lab of the University of Pennsylvania. He obtained his Ph.D. in Computer Science from the University of Minnesota in 2014 and a Bachelor of Technology degree in Electronics and Telecommunication from the College of Engineering Pune, India, in 2008. He is a recipient of the NSF CAREER award (2020) and the CISE Research Initiation Initiative award (2016). He serves as an Associate Editor for the IEEE Transactions on Robotics and the IEEE Transactions on Automation Science and Engineering, and on the ICRA and IROS Conference Editorial Boards.
To achieve the goal of building natural language processing systems that help real-world users with tasks, such systems need to be able to communicate bi-directionally with their users. This includes systems describing and explaining their solutions and their difficulties. This also involves systems that can learn from a wide variety of feedback — including language — that a user may provide. I’ll describe our past and ongoing work in this space, and how it ties into our current ArtIAMAS project on systems to help triage documents efficiently and effectively.
Dr. Hal Daumé III is a Perotto Professor in Computer Science and Language Science at the University of Maryland, College Park; he has a joint appointment as a Senior Principal Researcher at Microsoft Research, New York City. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of natural language processing and artificial intelligence, with a focus on interactive learning and understanding and minimizing social harms that can be caused or exacerbated by computational systems.