Monday, March 21, 2016 at 11:47am.
Galois tech talk: Adversarial Machine Learning, Privacy, and Cybersecurity in the Age of Data Science
abstract:
Due to the exponential growth of our ability to collect, centralize, and share data in recent years, we have been able to tackle problems previously assumed to be insurmountable. Ubiquitous sensors, fast and efficient machine learning, and affordable commercial off-the-shelf technologies have not only deepened our understanding of our world, but also democratized these capabilities. As a direct result of this shift, we face a rapidly evolving set of challenges centered on our new data-driven world. In this talk we will discuss efforts by the Trustworthy Data Engineering Laboratory (TRUST Lab), in conjunction with our partners at the World Bank, the Federal Bureau of Investigation (FBI), the Environmental Protection Agency (EPA), and the City of Cincinnati, to identify and solve problems in Adversarial Machine Learning and Data Science. We will examine real case studies of debarment and corruption in international procurement with the World Bank, violations of the Resource Conservation and Recovery Act with the EPA, and human rights abuses of low-income citizens by corporate slumlords in the City of Cincinnati. In each of these cases we will show how malicious actors manipulated the data collection and data analytics process, whether through misinformation, abuse of regional corporate legal structures, collusion with state actors, or knowledge of the underlying predictive analytics algorithms, in order to damage the integrity of data used by machine learning and predictive analytic processes, or the outcomes derived from them, and thereby avoid regulatory oversight, sanctions, and investigations launched by national and multi-national authorities. This new type of attack is growing increasingly common, and we will motivate and encourage increased research on countermeasures and safeguards in information systems.
Additionally, we will discuss our efforts to combat problems in data privacy and availability in modern systems, highlighting the tradeoffs between data availability and privacy. We will also introduce a new formal logic for privacy-preserving data operations, demonstrate the performability and correctness of these operations, and present metrics for their improved privacy and suitability for high-assurance areas of data science.
bio:
Dr. Eric Rozier is an Assistant Professor of Electrical Engineering and Computing Systems and head of the Trustworthy Data Engineering Laboratory at the University of Cincinnati in Cincinnati, Ohio. He has previously been named a Frontiers of Engineering Education Faculty Member by the National Academy of Engineering, a two-time Eric and Wendy Schmidt Data Science for Social Good Faculty Fellow at the University of Chicago, and an IBM Research Fellow. Dr. Rozier’s research interests revolve around the intersection of Data Science and Engineering with Cybersecurity, Reliability, and Performability Engineering, with a focus on dependable computing for critical infrastructures. His work in Adversarial Machine Learning was recently featured as one of the inaugural talks at the USENIX Enigma conference on emerging threats and novel attacks. Before joining the University of Cincinnati, Dr. Rozier was the founding director of the Fortinet Cybersecurity Laboratory at the University of Miami, where he worked to develop and commercialize new technologies in homomorphic encryption for cloud-based systems. He earned his Ph.D. from the University of Illinois at Urbana-Champaign, where he worked on applications in fault tolerance and security with the National Center for Supercomputing Applications and the Information Trust Institute.