Time | Session |
---|---|
09:00 - 09:10 | Welcome |
09:10 - 10:00 | Keynote: Active defense and deception in modern industrial control systems (Michail Maniatakos, NYU Abu Dhabi) Abstract: Recent years have been pivotal in the field of Industrial Control Systems (ICS) security, with a large number of high-profile attacks exposing the lack of a design-for-security initiative in ICS. The evolution of ICS, abstracting the control logic to a purely software level hosted on a generic OS, combined with hyperconnectivity and the integration of popular open-source libraries providing advanced features, has expanded the ICS attack surface by increasing the entry points and by allowing traditional software vulnerabilities to be repurposed for the ICS domain. In this seminar, we will shed light on the security landscape of modern ICS, investigating active defense techniques for the supply-chain problem of modern ICS firmware and motivating the need to employ appropriate vulnerability assessment tools. We will present methodologies for blackbox fuzzing of modern ICS and hotpatching, as well as the use of Large Language Models for ICS honeypot development. Bio: Michail (Mihalis) Maniatakos is an Associate Professor of Electrical and Computer Engineering at New York University (NYU) Abu Dhabi, UAE, and a Research Associate Professor at the NYU Tandon School of Engineering, New York, USA. He is the Director of the MoMA Laboratory (nyuad.nyu.edu/momalab) at NYU Abu Dhabi. He received his Ph.D. in Electrical Engineering, as well as M.Sc. and M.Phil. degrees, from Yale University. He also received B.Sc. and M.Sc. degrees in Computer Science and Embedded Systems, respectively, from the University of Piraeus, Greece. His research interests, funded by industrial partners, the US government, and the UAE government, include privacy-preserving computation and industrial control systems security. |
10:00 - 11:00 | Cyber Deception, Honeypots “Koney: A Cyber Deception Orchestration Framework for Kubernetes” M. Kahlhofer, M. Golinelli, S. Rass Abstract: System operators responsible for protecting software applications remain hesitant to implement cyber deception technology, including methods that place traps to catch attackers, despite its proven benefits. Overcoming their concerns removes a barrier that currently hinders industry adoption of deception technology. Our work introduces deception policy documents to describe deception technology “as code” and pairs them with Koney, a Kubernetes operator that facilitates the setup, rotation, monitoring, and removal of traps in Kubernetes. We leverage cloud-native technologies, such as service meshes and eBPF, to automatically add traps to containerized software applications without having access to the source code. We focus specifically on operational properties, such as maintainability, scalability, and simplicity, which we consider essential to accelerate the adoption of cyber deception technology and to facilitate further research on cyber deception. “VelLMes: A high-interaction AI-based deception framework” M. Sladić, V. Valeros, C. Catania, S. Garcia Abstract: There are very few state-of-the-art (SotA) deception systems based on Large Language Models. The existing ones are limited to simulating only one type of service, mainly SSH shells. These systems, as well as deception technologies not based on LLMs, lack an extensive evaluation that includes human attackers. Generative AI has recently become a valuable asset for cybersecurity researchers and practitioners, and the field of cyber-deception is no exception. Researchers have demonstrated how LLMs can be leveraged to create realistic-looking honeytokens, fake users, and even simulated systems that can be used as honeypots.
This paper presents an AI-based deception framework called VelLMes, which can simulate multiple protocols and services such as an SSH Linux shell, MySQL, POP3, and HTTP. All of these can be deployed and used as honeypots, so VelLMes offers a variety of choices for deception design based on the users’ needs. VelLMes is designed to be attacked by humans, so interactivity and realism are key to its performance. We evaluate both its generative capabilities and its deception capabilities. Generative capabilities were evaluated using unit tests for LLMs. The results of the unit tests show that, with careful prompting, LLMs can produce realistic-looking responses, with some LLMs achieving a 100% passing rate. In the case of the SSH Linux shell, we evaluated deception capabilities with 89 human attackers. The attackers interacted with a randomly assigned shell (either honeypot or real) and had to decide whether it was a real Ubuntu system or a honeypot. The results showed that about 30% of the attackers thought that they were interacting with a real system when they were assigned an LLM-based honeypot. Lastly, we deployed 10 instances of the SSH Linux shell honeypot on the Internet to capture real-life attacks. Analysis of these attacks showed that LLM honeypots simulating Linux shells can perform well against unstructured and unexpected attacks on the Internet, responding correctly to most of the issued commands. |
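The core mechanism the VelLMes abstract describes, an LLM standing in for a real service such as an SSH shell, can be sketched as a simple dialogue loop. This is an illustrative sketch only: the system prompt, function names, and the `llm` callable are assumptions for demonstration, not the authors' actual framework code.

```python
# Minimal sketch of an LLM-backed shell honeypot loop, in the spirit of
# VelLMes. Keeping the full dialogue history is what lets responses stay
# consistent across commands (e.g. a file "created" earlier still "exists").

SYSTEM_PROMPT = (
    "You are a Linux terminal on an Ubuntu 22.04 server. "
    "Reply with exactly the output a real shell would print, nothing else."
)

def honeypot_session(llm, commands):
    """Feed attacker commands to the model one by one, accumulating history."""
    history = [("system", SYSTEM_PROMPT)]
    outputs = []
    for cmd in commands:
        history.append(("user", cmd))
        reply = llm(history)          # any chat-completion backend fits here
        history.append(("assistant", reply))
        outputs.append(reply)
    return outputs

# Example with a canned stand-in for the model (no real LLM needed):
def fake_llm(history):
    last_cmd = history[-1][1]
    return {"whoami": "root", "pwd": "/root"}.get(last_cmd, "")

print(honeypot_session(fake_llm, ["whoami", "pwd"]))  # ['root', '/root']
```

In a deployment, `fake_llm` would be replaced by a call to an actual chat-completion model; the abstract's unit tests for realism would then exercise exactly this command-to-response mapping.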
11:00 - 11:30 | Coffee Break |
11:30 - 12:00 | Misc I “Experience Paper: Uncovering a Privileged Insider Threat in Action” R. Bhaskaran, R. Gopalakrishna Abstract: Insider threats pose a persistent and significant challenge to corporate security. The threat escalates dramatically when the malicious insider is a member of the IT or cybersecurity team. Their inherent knowledge, access, and privileges make it extremely difficult to prevent, detect, or deter malicious activity performed by insiders who operate inside the security team. Traditional security controls such as User and Entity Behaviour Analytics (UEBA), Data Loss Prevention (DLP), Privileged Access Management (PAM), Identity Threat Detection and Response (ITDR), Endpoint Detection and Response (EDR), and Network Traffic Analysis (NTA) are primarily designed to detect external threats. While monitoring behavioural and psychological indicators, such as employee dissatisfaction or financial pressures, can be useful for insider threat detection, these methods are often insufficient when the insider resides within the IT or cybersecurity team. Privileged insiders can use their knowledge of the deployed security tools, their configurations, and their whitelists to evade defenses. Deception technology offers a unique capability to detect insider threats, even those originating from privileged teams within IT or cybersecurity. This cyber deception practitioner’s paper uses a real-world case study to describe how deception was the only tool that detected a malicious insider working in the security team of a large enterprise operating in a highly regulated industry. This paper provides details on the key deception design principles that enabled the defense team to catch the malicious privileged insider. |
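The abstract's claim that deception succeeds where behavioural analytics fail rests on a simple property: a well-placed trap fires deterministically, with no baseline to evade. A minimal sketch of that idea, using a decoy account as a honeytoken; the account names and alert wiring are hypothetical, not details from the paper's case study.

```python
# Sketch of a honeytoken-style trap of the kind the paper argues can catch
# privileged insiders: decoy credentials that no legitimate workflow ever
# uses, so ANY authentication attempt with them is a high-fidelity alert.

DECOY_ACCOUNTS = {"svc-backup-legacy", "admin-dr-test"}  # illustrative names

def check_login_event(username, alerts):
    """Call from the authentication log pipeline for every login attempt."""
    if username in DECOY_ACCOUNTS:
        # Deterministic detection: no behavioural baseline to learn or evade,
        # which is why it still works against insiders who know the tooling.
        alerts.append(f"DECEPTION ALERT: decoy account '{username}' used")
        return True
    return False

alerts = []
check_login_event("alice", alerts)              # legitimate user: no alert
check_login_event("svc-backup-legacy", alerts)  # trap fires
print(alerts)
```

The design point is that the decoy must be invisible to normal operations but plausible to someone enumerating accounts, so a trigger is near-certain evidence of malicious reconnaissance.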
12:00 - 13:00 | Decision Making and Biases I “A Case Study on the Use of Representativeness Bias as a Defense Against Adversarial Cyber Threats” B. Hitaj, G. Denker, L. Tinnel, J. Lawson, B. DeBruhl, G. McCain, D. Starink, M. McAnally, D. Aaron, N. Bunting, A. Fafard, R. Roberts Abstract: Cyberspace is an ever-evolving battleground involving adversaries seeking to circumvent existing safeguards and defenders aiming to stay one step ahead by predicting and mitigating the next threat. Existing mitigation strategies have focused primarily on solutions that consider software or hardware aspects, often ignoring the human factor. This paper takes a first step towards psychology-informed, active defense strategies, where we target biases that human beings are susceptible to under conditions of uncertainty. Using capture-the-flag events, we create realistic challenges that tap into a particular cognitive bias: representativeness. This study finds that this bias can be triggered to thwart hacking attempts and divert hackers into non-vulnerable attack paths. Participants were exposed to two different challenges designed to exploit representativeness biases. One of the representativeness challenges significantly diverted attackers away from vulnerable attack vectors and onto non-vulnerable paths, signifying an effective bias-based defense mechanism. This work paves the way towards cyber defense strategies that leverage additional human biases to thwart future, sophisticated adversarial attacks. “Towards bio-inspired cyber-deception: a case study of SSH and Telnet honeypots” A. Safargalieva, E. Vasilomanolakis Abstract: Cyber-deception is a well-established yet rapidly evolving field within cybersecurity that focuses on deceiving attackers to protect networks and systems. Despite its potential, the field faces several challenges that impede its advancement. Existing deception mechanisms often suffer from fundamental design flaws.
Additionally, evaluating the effectiveness of these mechanisms remains a significant challenge. In this paper, we propose to address the limitations and weaknesses of the existing honeypot systems by incorporating bio-inspired deceptive approaches: camouflage, bluffing and playing dead. We evaluate the effectiveness of three such strategies by deploying 10 instances of Cowrie honeypots over a two-week period, capturing a total of 470,302 SSH and 40,867 Telnet login attempts from 8,874 unique IP addresses. Our analysis looks at the impact of bio-inspired features on session duration and the speed with which attackers leave the honeypots. The results reveal that modifications to baseline SSH honeypots encourage longer attacker engagement, whereas for Telnet, attackers tend to exit faster. These findings suggest that bio-inspired modifications can influence attacker behavior and enhance the overall efficacy of cyber-deception strategies. |
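The three bio-inspired strategies named in the abstract, camouflage, bluffing, and playing dead, can be pictured as different transformations of a honeypot's observable behaviour. The following sketch is an assumption-laden illustration (the banner strings and function names are invented, not the paper's actual Cowrie modifications):

```python
# Illustrative sketch of the three bio-inspired deception strategies from
# the paper, applied to an SSH honeypot's greeting banner.

def apply_strategy(strategy, banner):
    if strategy == "camouflage":
        # Blend in: advertise a common, unremarkable real-world server version
        # instead of anything that fingerprints the honeypot.
        return "SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.6"
    if strategy == "bluff":
        # Feign a defended, monitored system to provoke a reaction.
        return banner + "\r\nWARNING: unauthorized access is logged and reported"
    if strategy == "play_dead":
        # Go silent, as if the service crashed or the host went offline.
        return ""
    return banner  # baseline: unmodified behaviour

print(apply_strategy("bluff", "SSH-2.0-OpenSSH_8.9p1"))
```

The paper's finding that SSH attackers linger longer while Telnet attackers leave faster suggests the right transformation depends on the protocol and the attacker population, so a deployment would likely rotate strategies per service.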
13:00 - 14:00 | Lunch |
14:00 - 14:30 | Misc II “Detection of Reverse Engineering Activities Before the Attack” P. Falcarin, M. Venerba, M. De Giorgi, F. Sarro Abstract: In typical cybersecurity scenarios, attacks are detected after the fact; in this work, we apply an active defence by detecting the activities of attackers who try to analyse and reverse engineer the code of an Android app, before they are able to perform an attack by tampering with the application code. We instrumented an app to collect various runtime data before and after deployment, both in normal behaviour and under malicious analysis. We introduce the concept of partial execution paths, subsets of a program trace that are suddenly interrupted, as possible indicators of debugging activities. Such clues, along with system call sequences and the delays between them, stack information, and sensor data, are collected to help our system decide whether the app is under analysis and its device should be considered compromised. |
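The abstract names two concrete signals: traces that stop before reaching an expected end point (partial execution paths) and abnormal delays between consecutive calls, as happens under single-stepping in a debugger. A minimal sketch of combining the two; the event names, terminator, and threshold are assumptions for illustration, not the paper's actual parameters.

```python
# Sketch of the two detection signals described in the abstract:
#   1. partial execution path: the trace never reaches its terminal event,
#      hinting that a debugger paused or killed the app mid-run;
#   2. abnormal gap between consecutive events, suggesting single-stepping.

EXPECTED_END = "trace_end"   # assumed terminal event of a complete run
MAX_GAP_SECONDS = 2.0        # assumed threshold; would be tuned empirically

def analysis_suspected(trace):
    """trace: list of (event_name, timestamp) tuples from the instrumented app."""
    if not trace:
        return False
    # Signal 1: trace interrupted before the expected terminator.
    partial = trace[-1][0] != EXPECTED_END
    # Signal 2: any suspiciously long pause between consecutive events.
    gaps = [b[1] - a[1] for a, b in zip(trace, trace[1:])]
    delayed = any(g > MAX_GAP_SECONDS for g in gaps)
    return partial or delayed

normal   = [("open", 0.00), ("read", 0.01), ("trace_end", 0.02)]
debugged = [("open", 0.00), ("read", 5.30)]  # long gap, no terminator
print(analysis_suspected(normal), analysis_suspected(debugged))  # False True
```

A real deployment would fuse these signals with the stack and sensor data the abstract mentions rather than applying a single fixed threshold.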
14:30 - 15:30 | Decision Making and Biases II “Cognitive Biases in Cyber Attacker Decision-Making: Translating Behavioral Insights into Cybersecurity” A. Aggarwal, M. Ferreira, P. Aggarwal, P. Rajivan, C. Gonzalez Abstract: Cognitive biases shape human decision making, yet their role in cybersecurity remains under-explored. Notably, little research examines whether cyber attackers succumb to cognitive biases—an insight that could inform defense strategies. This study addresses this gap by testing whether traditional cognitive biases replicate in a cybersecurity context. We develop cybersecurity-specific versions of three biases: Availability (Frequency and Recall), Recency, and Loss Aversion (Endowment, Gain, and Loss Framing). A within-subjects experiment compared responses to bias problems in cybersecurity and non-security contexts. The results indicate a successful translation of Frequency, Recency, Endowment, and Loss Framing biases to cybersecurity. However, Recall and Gain Framing biases did not translate effectively. Interestingly, many participants who exhibited bias in the general context did not in the cybersecurity context. Individual-level analyses suggest that participants engage in more deliberate reasoning when faced with cybersecurity-related problems, whereas they rely more on intuition in general scenarios. These findings underscore the nuances of designing cybersecurity scenarios that use cognitive biases but suggest that designing cyber defenses according to cognitive biases can be a successful strategy. “Position Paper: The Extended Pyramid of Pain: The Attacker’s Nightmare Edition” R. Gopalakrishna, R. Bhaskaran Abstract: Since its introduction in 2013, the Pyramid of Pain has been a fundamental framework for defenders to prevent, detect, and respond to cyber threats. 
However, as cyber adversaries continue to evolve, they increasingly adopt stealthy techniques, such as defense evasion and credential theft, rendering traditional cyber defenses less effective. This paper introduces The Extended Pyramid of Pain, which moves beyond traditional Tactics, Techniques, and Procedures (TTPs) to incorporate preemptive and proactive disruption strategies, including targeting the humans behind modern cyber-attacks, dismantling attacker infrastructure, disrupting the attackers’ economic model, and leveraging Psychological Operations (PSYOPs) to introduce uncertainty, fear, and hesitation in the minds of those humans. By escalating the cost, risk, and complexity for attackers, The Extended Pyramid of Pain shifts the advantage to defenders, making cyber-attacks a less viable and more costly endeavor. |
15:30 - 15:40 | Closing Remarks |
16:00 - 16:30 | Coffee Break |