Keynote Speakers

We are proud to announce the 2023 confirmed keynote speakers.

ARES KEYNOTES

Elisa Costante

© Forescout Technologies Inc.

Dr. Elisa Costante
VP of Research at Forescout Technologies

Exploring the Cyber Threat Landscape: Analyzing the Past to Uncover Future Trends
The threat landscape is constantly evolving, and so must security solutions. Cybercriminal organizations, new ransomware tactics, and the proliferation of IoT/OT/IoMT devices have rapidly changed the modern threat landscape. To devise effective security solutions, security practitioners and researchers must pay close attention to these continuously evolving cyber threats. In this presentation, I’ll explore the last 10 years of cyber threats and provide predictions for future trends. I’ll also identify open research questions whose answers the industry is looking for, and which could help further enhance security measures and protect our networks against new and emerging threats.

Elisa Costante is the VP of Threat Research at Forescout. In her role, she leads the activities of Vedere Labs, a team of cybersecurity researchers focused on vulnerability research, threat analysis, and threat mitigation. She has 10+ years of experience with the security challenges posed by the IT/OT/IoT convergence. In her prior role as CTO of SecurityMatters, she led product innovation activities in the field of network intrusion detection. Elisa holds a PhD in Cyber Security from the Eindhoven University of Technology, where she specialized in machine learning techniques for data leakage detection.

Pierangela Samarati

© Pierangela Samarati

Pierangela Samarati
Università degli Studi di Milano, Italy

Data security and privacy in emerging scenarios
The rapid advancements in Information and Communication Technologies (ICTs) have been greatly changing our society, with clear societal and economic benefits. Mobile technology, the Cloud, Big Data, and the Internet of Things are services and technologies that are becoming more and more pervasive and conveniently accessible, moving us towards the realization of a ‘smart’ society. At the heart of this evolution is the ability to collect, analyze, process, and share an ever-increasing amount of data, to extract knowledge for offering personalized and advanced services.
A major concern, and potential obstacle, towards the full realization of such evolution is represented by security and privacy issues. As a matter of fact, the (actual or perceived) loss of control over data and potential compromise of their confidentiality can have a strong detrimental impact on the realization of an open framework for enabling collection, processing, and sharing of data, typically stored or processed by external cloud services. In this talk, I will illustrate some security and privacy issues arising in emerging scenarios, focusing in particular on the problem of managing data while guaranteeing confidentiality and integrity of data stored or processed by external providers.
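To make the confidentiality-and-integrity requirement concrete, here is a minimal illustrative sketch (ours, not from the talk) of client-side authenticated encryption: the data owner keeps the key, and the external provider only ever stores ciphertext it cannot read or silently modify. It assumes the Python cryptography package is available; key management and sharing are out of scope.

```python
# Minimal sketch (not from the talk): client-side authenticated encryption so an
# untrusted storage provider never sees plaintext and cannot tamper undetected.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # kept by the data owner, never shared with the provider
cipher = Fernet(key)

record = b"patient_id=42;diagnosis=..."
token = cipher.encrypt(record)   # ciphertext + authentication tag: confidentiality and integrity

# 'token' is what would be uploaded to the external cloud provider.

try:
    plaintext = cipher.decrypt(token)  # raises InvalidToken if the ciphertext was altered
    assert plaintext == record
except InvalidToken:
    print("integrity check failed: the stored data was modified")
```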

Pierangela Samarati is a Professor at the Department of Computer Science of the Università degli Studi di Milano, Italy. Her main research interests are in data and applications security and privacy, especially in emerging scenarios. She has participated in several EU-funded projects involving different aspects of information protection, also serving as project coordinator. She has published more than 290 peer-reviewed articles in international journals, conference proceedings, and book chapters. She has been a Computer Scientist in the Computer Science Laboratory at SRI, CA (USA). She has been a visiting researcher at the Computer Science Department of Stanford University, CA (USA), and at the Center for Secure Information Systems of George Mason University, VA (USA). She is the chair of the IEEE Systems Council Technical Committee on Security and Privacy in Complex Information Systems (TCSPCIS), of the ERCIM Security and Trust Management Working Group (STM), and of the ACM Workshop on Privacy in the Electronic Society (WPES). She is a member of several steering committees. She is an IEEE Fellow (2012), ACM Fellow (2021), and IFIP Fellow (2021). She has received the IEEE Computer Society Technical Achievement Award (2016) and the ESORICS Outstanding Research Award (2018).

CD-MAKE KEYNOTES

Michael Bronstein

© Michael Bronstein

Michael Bronstein
University of Oxford, United Kingdom

Physics-inspired learning on graphs
The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, allowing the expressive power of GNNs to be analysed. We argue that the very “node-and-edge”-centric mindset of current graph deep learning schemes may hinder future progress in the field. As an alternative, we propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.
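For readers unfamiliar with the message-passing paradigm the abstract refers to, the following minimal sketch (our illustration, not the speaker’s code) shows one sum-aggregation message-passing layer: each node combines its own features with the aggregated features of its neighbours.

```python
# Minimal sketch of one message-passing layer (sum aggregation), illustrating the
# "node-and-edge" paradigm referred to above; illustrative only.
import numpy as np

def message_passing_layer(A, X, W_self, W_neigh):
    """A: (n, n) adjacency matrix, X: (n, d) node features,
    W_self, W_neigh: (d, d_out) learnable weight matrices."""
    messages = A @ X                     # each node sums its neighbours' features
    H = X @ W_self + messages @ W_neigh  # combine own state with aggregated messages
    return np.maximum(H, 0.0)            # ReLU non-linearity

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy 3-node graph
X = rng.normal(size=(3, 4))                                   # random node features
H = message_passing_layer(A, X, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(H.shape)  # (3, 8): new node embeddings after one round of message passing
```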

Michael Bronstein is the DeepMind Professor of AI at the University of Oxford and Head of Graph Learning Research at Twitter. He was previously a professor at Imperial College London and held visiting appointments at Stanford, MIT, and Harvard, and has also been affiliated with three Institutes for Advanced Study (at TUM as a Rudolf Diesel Fellow (2017-2019), at Harvard as a Radcliffe fellow (2017-2018), and at Princeton as a short-time scholar (2020)). Michael received his PhD from the Technion in 2007. He is the recipient of the Royal Society Wolfson Research Merit Award, Royal Academy of Engineering Silver Medal, five ERC grants, two Google Faculty Research Awards, and two Amazon AWS ML Research Awards. He is a Member of the Academia Europaea, Fellow of IEEE, IAPR, BCS, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young Scientist. In addition to his academic career, Michael is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019).

Mireille Hildebrandt

© Mireille Hildebrandt

Mireille Hildebrandt
Co-Director LSTS, Vrije Universiteit Brussel (VUB), Belgium

Whiteboxing machine learning
The deployment of AI systems based on machine learning (ML) in real-world scenarios faces a number of challenges due to their black-box nature. The GDPR right to an explanation has given rise to frantic attempts to develop and design self-explanatory systems, meant to help people understand these systems’ decisions and behaviour. In this keynote I will explain (pun intended) why meaningful explanations require keen attention to the proxies used in ML research design. Once those confronted with the decisions or behaviour of ML systems have a better understanding of the pragmatic choices that must be made to allow a machine to learn, it will become easier to foresee what ML systems can and cannot do. Whiteboxing ML should focus on the proxies that stand for real-world events, actions and states of affairs, highlighting that a proxy (dataset, variable, model) is not what it stands for.

Mireille Hildebrandt is a Research Professor on ‘Interfacing Law and Technology’ at the Vrije Universiteit Brussel (VUB), appointed by the VUB Research Council. She is co-Director of the Research Group on Law, Science, Technology and Society studies (LSTS) at the Faculty of Law and Criminology.
She also holds the part-time Chair of Smart Environments, Data Protection and the Rule of Law at the Science Faculty, at the Institute for Computing and Information Sciences (iCIS) at Radboud University Nijmegen.
Her research interests concern the implications of automated decisions, machine learning and mindless artificial agency for law and the rule of law in constitutional democracies. Hildebrandt has published 5 scientific monographs, 23 edited volumes or special issues, and over 100 chapters and articles in scientific journals and volumes. She received an ERC Advanced Grant for her project on ‘Counting as a Human Being in the era of Computational Law’ (2019-2024), which funds COHUBICOL. In that context she is co-founder of the international peer-reviewed Journal of Cross-Disciplinary Research in Computational Law, together with Laurence Diver (co-Editor in Chief is Frank Pasquale). In 2022 she was elected a Fellow of the British Academy (FBA).

WORKSHOP KEYNOTES

CUING

Martin Steinebach

© Martin Steinebach

Martin Steinebach
Fraunhofer Institute, Germany

Error Rates in Multimedia Forensics
The keynote will address the critical importance of false positive rates in multimedia forensics, a field dedicated to the identification, classification, and authentication of digital content. While the field has historically focused on true positives, this talk aims to highlight the importance of false positives and their impact on forensic investigations and other applications.
The talk will explore the causes of false positives, including limitations of forensic techniques, algorithmic biases, and the inherent complexity of multimedia analysis. It will emphasize the trade-off between false positives and false negatives, and the need for a balanced approach that is appropriate for a given application. The requirements can be very different between monitoring solutions such as upload filters or chat control on the one hand and individual analysis on the other. Steganalysis is another good example: error rates and their consequences depend heavily on the goals of a steganalysis application. While searching for occurrences of the use of simple steganographic tools may allow acceptable error rates, broad monitoring of state-of-the-art embedders with realistic payloads and usage frequencies seems to be at least challenging.
Overall, this talk aims to raise awareness of the importance of false positive rates in multimedia security and to inspire the audience to contribute to the advancement of reliable and fair forensic practices in an increasingly digital world.
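The base-rate effect behind this trade-off can be illustrated with a small back-of-the-envelope calculation (ours, not from the talk): even a detector with a 1% false positive rate is overwhelmed by false alarms once the prevalence of true positives drops to the levels typical of broad monitoring.

```python
# Illustrative calculation (not from the talk): why false positive rates dominate
# in broad monitoring scenarios with very low prevalence of true positives.
def precision(tpr, fpr, prevalence):
    """Fraction of flagged items that are truly positive (Bayes' rule)."""
    tp = tpr * prevalence
    fp = fpr * (1.0 - prevalence)
    return tp / (tp + fp)

# Targeted forensic analysis: material pre-selected, 1 in 10 items actually relevant.
print(precision(tpr=0.95, fpr=0.01, prevalence=0.10))   # ~0.91

# Upload-filter-style monitoring: only 1 in 100,000 items is actually positive.
print(precision(tpr=0.95, fpr=0.01, prevalence=1e-5))   # ~0.001: ~999 of 1000 alarms are false
```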

Martin Steinebach heads the Media Security and IT Forensics department at the Fraunhofer Institute for Secure Information Technology SIT. He studied computer science at the TU Darmstadt from 1992 to 1999. In 2003, he received his PhD in Computer Science from the TU Darmstadt with the topic of digital audio watermarking. In March 2002, he became head of the MERIT department at Fraunhofer IPSI, which dealt with media data security, and of the C4M Competence Center for Media Security.
In 2007, following the dissolution of Fraunhofer IPSI, he moved to Fraunhofer SIT, where he first headed a group on media security and then became head of the Media Security and Forensics department in January 2010. Since November 2016, he has been an honorary professor at TU Darmstadt. Since 2019, Martin Steinebach has also been Principal Investigator at the National Research Center for Applied Cyber Security ATHENE, where he leads the research areas “Reliable and Verifiable Information through Secure Media (REVISE)” and “Security and Privacy in Artificial Intelligence (SenPAI)”. With his work on the ForBild project, Martin Steinebach and his colleagues won second place in the 2012 IT Security Award of the Horst Görtz Foundation. He leads numerous projects on IT forensics and media security for industry and the public sector. He is the author of more than 250 technical publications.

ENS

Michał Choraś

© Michał Choraś

Prof. Michał Choraś
Bydgoszcz University of Science and Technology, Poland

Trustworthy and Explainable AI (xAI) in Emerging Network Security Applications
AI solutions are widely used in a plethora of applications, including in the network security domain. However, in order to be fully adopted and accepted by societies, those solutions need to fulfill not only the requirements of effectiveness (e.g. of cyberattack detection), but also those of trustworthy AI. In this talk, practical examples of trustworthy AI solutions in network security will be presented and discussed, in particular for cybersecurity and fake news detection. The results of selected EU and national projects will also be shown (e.g. AI4Cyber, STARLIGHT, APPRAISE and SWAROG).

Michał Choraś currently holds a full professorship at the Bydgoszcz University of Science and Technology, Poland, where he is the Head of the Teleinformatics Systems Division and the PATRAS Research Group. He was granted the title of full professor in December 2021. He is also affiliated with FernUniversität in Hagen, Germany, where he was the Project Coordinator of the successful H2020 SIMARGL project (secure intelligent methods for advanced recognition of malware and stegomalware). He also works as a project coordinator/manager and security consultant.
He is the author of over 300 reviewed scientific publications. In 2021 and 2022, he was included in the Stanford List of Top 2% Scientists. His research interests include AI, machine learning, data science, and pattern recognition in several domains such as cyber security, fake news detection, anomaly detection, data correlation, biometrics, and critical infrastructure protection. He has been involved in more than twenty EU projects (e.g. APPRAISE, AI4CYBER, SPARTA, STARLIGHT, SocialTruth, CIPRNet, Q-Rapids, and InfraStress). His personal website is http://michal-choras.com.

ETACS

Fabio di Franco

© Fabio di Franco

Fabio Di Franco
ENISA, EU

European Cybersecurity Skills Framework
The European Cybersecurity Skills Framework (ECSF) is a tool for a common understanding of cybersecurity professional role profiles in Europe and a common mapping to the appropriate skills and competences required. It is an integral part of the Cybersecurity Skills Academy, recently announced by the European Commission, and serves to define and assess skills, monitor the evolution of skill gaps, and provide indications of emerging needs.

Dr. Fabio Di Franco leads ENISA’s activities on cyber skills development for highly skilled people. He chairs the working group that developed and maintains the European Cybersecurity Skills Framework (ECSF). He is also responsible for developing and delivering training to EU member states and EU institutions on information security management and IT security.

Kendra Walther

© Kendra Walther

Kendra Walther
University of Southern California, USA

Cybersecurity Education for All
In modern society, surrounded by technology and applications of data analytics, all students need to be digitally fluent. Furthermore, as the pace of technological adoption into foundational aspects of our lives increases, digital fluency and cybersecurity are increasingly co-dependent.  Cybersecurity education for all means that learners need to develop cybersecurity awareness to protect themselves, their academic institutions, their employers, and in turn society at large. Additionally, because learners will need to continuously adapt to new threats or risks, both digital fluency and cybersecurity education require developing an attitude and disposition for lifelong learning and problem solving. As educators, we must collaborate to demand and lead the adoption of these initiatives.

Kendra Walther serves as Associate Director for Faculty Affairs and an Associate Professor in the Information Technology Program at the University of Southern California. Kendra holds a bachelor’s degree in Computer Science from Harvey Mudd College and a master’s degree in Computer Science from the University of Maryland, College Park. She is currently pursuing an EdD in Educational Leadership at Rossier @ USC. She has worked for the Aerospace Corporation and taught Computer Science at Cal State LA, St. Albans School, and the Milwaukee School of Engineering. Kendra is passionate about teaching and is constantly trying to find more ways to help her students understand the principles of programming.

IoT-SECFOR

Alessandro Brighente

© Alessandro Brighente

Dr. Alessandro Brighente
Department of Mathematics, University of Padova, Padua, Italy

Jumping Jams: Effective Jamming in Channel Hopping-Based IoT Networks
Channel hopping denotes the process of adaptively selecting a new communication channel from a given set when the currently used one undergoes significant quality degradation. This strategy can help wireless networks mitigate both friendly and malicious interference and hence guarantee effective communications. To make the hopping pattern as effective as possible, researchers have developed many different strategies, including reinforcement learning-based ones. In this talk, we will explore new attacking strategies, their implementation on a real-world testbed, and possible mitigation solutions.
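As a purely illustrative toy model (not the attack or testbed presented in the talk), the sketch below contrasts a jammer with no knowledge of the hopping pattern, a reactive jammer that always lags one slot behind, and a worst-case jammer that has inferred the pattern, the scenario that motivates smarter, e.g. learning-based, hopping strategies.

```python
# Toy illustration only: channel hopping versus jammers with different knowledge.
import random

CHANNELS = list(range(16))
SLOTS = 10_000
rng = random.Random(1)

# Hopping pattern derived from a seed the legitimate nodes share.
pattern = [rng.randrange(len(CHANNELS)) for _ in range(SLOTS)]

def jammed_fraction(jammer):
    hits = sum(1 for t in range(SLOTS) if jammer(t) == pattern[t])
    return hits / SLOTS

random_jammer  = lambda t: rng.randrange(len(CHANNELS))        # no knowledge of the pattern
reactive_jammer = lambda t: pattern[t - 1] if t > 0 else 0     # always one slot behind
pattern_aware  = lambda t: pattern[t]                          # has inferred the full pattern

print("random jammer:       ", jammed_fraction(random_jammer))    # ~1/16 of slots jammed
print("reactive jammer:     ", jammed_fraction(reactive_jammer))  # ~1/16: hopping beats a lagging jammer
print("pattern-aware jammer:", jammed_fraction(pattern_aware))    # 1.0: every slot jammed
```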

Alessandro Brighente is an Assistant Professor at the University of Padova, Italy. He obtained his Ph.D. in Information Engineering from the University of Padova in 2021. He was a visiting researcher at Nokia Bell Labs, Stuttgart, the University of Washington, Seattle, and TU Delft, The Netherlands in 2019, 2022, and 2023, respectively. He has served as a TPC member for several international conferences, including ESORICS and WWW. He is the program chair for DevSecOpsRO, held in conjunction with EuroS&P 2023. He has been a guest editor for IEEE Transactions on Industrial Informatics and for Elsevier’s Computers & Security. He is part of several industrial and research projects, including EU-funded ones. His current research interests include security and privacy in cyber-physical systems, wireless communications, the Internet of Things, and blockchain.

IWAPS

Dimitris Tsolkas

© Dimitris Tsolkas

Dr. Dimitris Tsolkas
National and Kapodistrian University of Athens (NKUA), Greece, and Fogus Innovations & Services P.C, SME, Greece.

The 3GPP Common API framework (CAPIF) – open-source implementation and innovation potential
During the last decades, the use of Application Programming Interfaces (APIs) has served as a bridge between mobile operators and start-ups in emerging markets. Operators have begun to consider whether to open their APIs, starting from APIs related to mobile messaging, operator billing, etc. In addition, the recently witnessed convergence of the IT and Telecom worlds has contributed a lot to putting APIs at the epicenter of network programming and service provisioning. A representative example that proves this statement is the 5G Service Based Architecture (SBA), which has been designed around the flexibility that HTTP/2 RESTful APIs provide for interaction among 3GPP network functions. In this context, and in order to avoid duplication and inconsistency among the various API specifications that 3GPP has released, the specification of a common API framework (CAPIF) has been considered. In the framework of the EVOLVED-5G project (https://evolved-5g.eu/), Fogus Innovation & Services P.C. and Telefonica Spain have developed and provide as an open-source product the Core Function of the CAPIF (namely the CCF), together with ready-to-use templates for compliant API service provider/consumer entities. In the invited presentation, we will delve into the concept of network core openness through the exposure of CAPIF-compliant APIs, and we will discuss the innovation potential that emerges from enabling a secure and interoperable interaction of third-party applications with network functions.
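As a rough, hypothetical sketch of what consuming an exposed network API can look like from a third-party application, the snippet below uses invented endpoint paths and field names; it is not the actual CAPIF/3GPP specification and is only meant to convey the RESTful, token-based style of interaction discussed above.

```python
# Hypothetical sketch only: endpoint paths and field names are invented for
# illustration and are NOT the actual CAPIF/3GPP specification.
import requests

BASE = "https://capif.example.org"   # placeholder address of a CAPIF core function
TOKEN = "..."                        # access token obtained after onboarding (out of scope here)

# A third-party application ("API invoker") lists the service APIs it may consume.
resp = requests.get(
    f"{BASE}/service-apis",                        # invented path, for illustration only
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for api in resp.json():
    print(api.get("name"), api.get("version"))     # invented field names
```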

Dr. Dimitris Tsolkas holds a Ph.D. degree from the Department of Informatics and Telecommunications, National and Kapodistrian University of Athens (NKUA). He is currently a Senior Research Fellow at NKUA, and he also leads research and development activities at Fogus Innovations & Services P.C, SME, Greece. He has long experience in Research & Development (R&D) as well as in project management, through his participation in a plethora of EC-funded projects. He has also made substantial contributions to the 5GPPP Technology Board (TB) and the 5GPPP/5GIA Working Groups (WGs). His research record counts more than 60 articles in high-quality journals, books, and conferences, while his current research interests target wireless networks and systems, with emphasis on architectural and resource management aspects in mobile communication networks.

IWCC

Kacper Gradon

© Kacper Gradon

Dr hab. Kacper Gradoń, Ph.D., D.Sc.
Warsaw University of Technology, Poland
University College London, United Kingdom

What’s wrong with Wolfie? Generative Artificial Intelligence and its implications for (Cyber) Security
The presentation addresses the potential for the criminal abuse and hybrid-warfare weaponization of Generative Artificial Intelligence technologies. The focus is placed on the possible utilization of such tools by malign actors who orchestrate and run sophisticated scams (utilizing Open Source Intelligence, social engineering and text/voice/video impersonation) and targeted phishing campaigns, or who design, produce and propagate disinformation. The (cyber) security implications of the technology are presented from the perspective of the Future Crimes and Crime Science disciplines. The presentation raises questions about the ethical, moral and legal implications of such technologies and opens a discussion on the responsibility of technology developers for the abuse of their products and on IT industry governance.

Kacper Gradoń is an Associate Professor in Cybersecurity (Warsaw University of Technology), an Honorary Senior Research Fellow at the Department of Security and Crime Science (University College London), and a Visiting Fulbright Professor at the University of Colorado Boulder. He is also the World Health Organization Global Infodemic Manager. He is a double TED speaker, an expert in information warfare and the human-centric dimensions of cybersecurity, and a frequent consultant to law enforcement agencies and intelligence institutions worldwide. He has spoken at over 200 conferences on all continents. Previously he was an Associate Professor and Director of the Centre for Forensic Sciences (University of Warsaw). He was also a civilian expert for the General Headquarters of the Polish National Police (where he was responsible for the creation of the criminal intelligence and analysis framework). He has published extensively on the issues of cybercrime, future crimes, Artificial Intelligence, hybrid warfare and criminal investigation.

SecIndustry

Sabine Delaitre

© Sabine Delaitre

Dr. Sabine Delaitre
The Wick, innovation unit of BOSONIT, Spain

DocExploit’s Cybersecurity Suite, or how to be aware of the security level of your code in order to build and maintain more secure software
The DocExploit team creates innovative and high-quality cybersecurity solutions in response to the increasing security needs of the digital transformation process and Industry 4.0. With the DocSpot, DocDocker, and SirDocker tools, the DocExploit team offers a complete suite that ensures the security of your enterprise applications and container environments: DocSpot detects vulnerabilities in application source code, DocDocker scans for vulnerabilities in containers, and SirDocker manages and monitors containers efficiently and securely. Thus, to help prevent cybersecurity attacks, DocExploit aims to improve the quality and security of software from the very base of its source code, with high accuracy and drastically reduced false positives, by developing a code analyzer based on graph technology, which allows for optimizing the detection of software vulnerabilities in the source code. In this talk, we will describe our technical approach, the different tools of the suite we are developing, and the possible contributions to the industry by fostering security automation and improving security in software and IoT applications.

Dr. Sabine Delaitre is a Computer Scientist with a Doctorate in the areas of Risk Management, Artificial Intelligence and Knowledge Management from the Ecole des Mines de Paris. She has 20+ years of expertise in R&D projects. As a Senior Innovation Expert, she currently develops R&D projects focusing on Big Data, Advanced Analytics, AI/ML/FL, Semantics, Cybersecurity, low-code, and end-to-end IoT solutions in Industry 4.0, Energy and Smart Cities.

Aditya Raj

© Aditya Raj

Aditya Raj
Technology Consultant – Distributed Ledger Technology/Blockchain, Fujitsu, Belgium

Decentralized Trust for Industry 4.0
In the era of Industry 4.0, where the integration of digital and physical systems is becoming increasingly prevalent, the challenge of establishing trust in decentralized systems is paramount. This keynote will delve into the role of blockchain technology in fostering decentralized trust, thereby enhancing cybersecurity in the context of Industry 4.0. We will explore real-world examples of how blockchain technology, including Hyperledger and Enterprise Ethereum, is being used to secure data, streamline processes, and ensure reliable transactions. The discussion will also look ahead to the future of blockchain and cybersecurity in Industry 4.0, highlighting the potential for further innovation and transformation.

Aditya Raj is a Senior Blockchain Consultant at Fujitsu Track and Trust Solution Center, where he leverages his deep understanding of blockchain and distributed ledger technology to guide customers through their digital transformation journeys. With over 12 years of industry experience, Aditya is a trusted advisor to both customers and colleagues, helping them navigate the complexities of blockchain technology and its potential applications. As a blockchain evangelist, he is passionate about exploring how this revolutionary technology can enhance trust, security, and efficiency in the era of Industry 4.0. Aditya’s expertise and forward-thinking approach make him a sought-after speaker and consultant in the field of blockchain technology.

SP2I

Jan Willemson

© Jan Willemson

Jan Willemson
Senior researcher, Cybernetica, Estonia

Privacy and verifiability trade-offs in voting systems
The methods of running democratic elections have evolved over the centuries together with the requirements. One of the paradoxes of voting is that these requirements are inherently contradictory. On the one hand, we want full transparency and verifiability of the whole process by everyone, but on the other hand we want to keep the act of voting private to resist coercion attacks. It turns out that these properties cannot both be achieved 100%, so some kind of trade-off is required. In this talk we take a look at some of the possible equilibrium points and discuss their implications for practical voting systems.

Jan Willemson defended his PhD in computer science at Tartu University, Estonia, in 2002. He has been working at Cybernetica as a researcher since 1998, specializing in information security and cryptography. His areas of interest include risk analysis of heterogeneous systems, secure multi-party computations, e-government solutions and security aspects of Internet voting. He has authored more than 70 research papers published in international journals and conferences.

STAM

Martin Schneider

© Martin Schneider

Martin Schneider
Head of Testing in the Quality Engineering (SQC) business unit at the Fraunhofer Institute for Open Communication Systems (Fraunhofer FOKUS), Germany

Challenges and Opportunities for Security Testing and Monitoring in the Light of the Cyber-Resilience Act
The Cyber-Resilience Act obliges the manufacturers of software to perform a comprehensive security evaluation. Security testing and monitoring play a crucial role in meeting the requirements arising from the CRA. In the light of these upcoming requirements, the demands on security testing and monitoring will also change, with respect to efficiency, reliability, and independence. In my talk, I will interpret the CRA in the context of security testing and monitoring and present a solution that partially addresses these demands.

Martin Schneider is Head of Testing in the Quality Engineering (SQC) business unit at the Fraunhofer Institute for Open Communication Systems (Fraunhofer FOKUS). His research focuses on security testing of both software systems and machine learning systems. He leads various research projects in these areas at Fraunhofer FOKUS and is active in standardization bodies such as DIN and ETSI. He is the author of a primer on the application of fuzzing in the context of Common Criteria certification, published by the German Federal Office for Information Security (BSI), a trainer at the Fraunhofer Academy for security testing, and co-author of a book in this field.

WSDF

Ibrahim (Abe) Baggili

© Ibrahim Baggili

Dr. Ibrahim (Abe) Baggili
Louisiana State University, Baton Rouge, LA, USA

Who You Gonna Call? Unmasking AI Investigations through AI Forensics
Machine Learning (ML) and Artificial Intelligence (AI) have become inescapable forces, permeating every facet of our society, from business and academia to public and private sectors. However, AI failures are an undeniable reality that demands urgent attention from forensic researchers and practitioners. When AI embarks on mischievous endeavors, an important question arises: Who you gonna call? While AI/ML/<Insert Buzzword> are hailed as powerful tools to enhance digital forensics processing, it is imperative that we redirect our focus towards the forensics of AI. Join me in this keynote as we explore the emerging field of AI forensics, an essential sub-discipline within the realm of digital forensics. Through an overview of this evolving field and a spotlight on intriguing research problems, we will ignite understanding of the pressing need to address AI investigations.

Dr. Ibrahim (Abe) Baggili is a first generation Arab American. He is a Professor of Computer Science and Cybersecurity at Louisiana State University and the founder of the BiT Lab (Baggili Truth Lab) where he holds a joint appointment between the Division of Computer Science & Engineering and the Center for Computation and Technology. He has won numerous awards including the CT Civil Medal of Merit, the Medal of Thor from the Military Cyber Professional Association, CT 40 under 40, and is a fellow of the European Alliance for Innovation. Prior to that he was the director of the Connecticut Institute of Technology and Elder Family Endowed Chair of Computer Science & Cybersecurity at the Tagliatela College of Engineering at the University of New Haven. He received his BSc, MSc and PhD all from Purdue University where he worked as a researcher in CERIAS. Work with his students has uncovered vulnerabilities that impact over a billion people worldwide and has been featured in news and TV outlets in over 20 languages and he has published extensively in the domain of digital forensics. To learn more about the BiT Lab, you can visit https://csc.lsu.edu/~baggili .