Prof. Zhao is a Neubauer Professor of Computer Science at the University of Chicago. Over the years, he has worked on a number of areas, including P2P networks, online social networks, cognitive radios/dynamic spectrum, graph mining and modeling, and user behavior analysis. Since 2016, he has focused on security and privacy in machine learning and wearable systems. Since 2022, he has worked primarily on adversarial machine learning and tools to mitigate the harms of generative AI models against human creatives in different industries. His primary research venues are CCS/Oakland/USENIX Security. In the past, he has published at a range of top conferences, including NeurIPS/CVPR, IMC/WWW, CHI/CSCW, and SIGCOMM/NSDI/MobiCom.
https://people.cs.uchicago.edu/~ravenben/
Generative AI models are adept at producing images that mimic visual art created by human artists. Beyond mimicking individual artists and their styles, text-to-image diffusion models are often used to commit fraud against individuals and commercial entities interested in licensing or purchasing human art. In this talk, I will discuss the challenges of distinguishing generative AI images from visual art produced by human artists, and why it is an important problem to solve for both human artists and AI model trainers. I will present our recent results from a large experimental study evaluating the practical efficacy of different genAI image detectors, including supervised classifiers, diffusion-specific detectors, and humans (via a user study involving more than 4000 artists). We find that there are no ideal solutions, and perhaps a hybrid of artists and ML models is our best hope moving forward.
Dr. Chaowei Xiao is an Assistant Professor at the University of Wisconsin, Madison, and a research scientist at NVIDIA Research. He is currently interested in exploring safety and security problems in (Multimodal) Large Language Models and systems, as well as studying the role of LLMs in different application domains. He has received multiple Best Paper Awards at top-tier security and system conferences such as USENIX Security, MobiCom, and ESWN, along with the ACM Gordon Bell Special Prize for COVID-19 research and an Amazon Faculty Award. His research has been featured in multiple media outlets, including Nature, Wired, Fortune, and The New York Times. One of Dr. Xiao's research outputs is also on display at the London Science Museum.
https://xiaocw11.github.io/
In recent years, Large Language Models (LLMs) have garnered significant attention for their extraordinary ability to comprehend and process a wide range of textual information. Despite their vast potential, they still face safety challenges that hinder their practical application. In this talk, our journey starts by exploring two safety challenges of existing LLMs: jailbreak attacks and prompt injection attacks. I will introduce principles for red-teaming LLMs by automatically generating jailbreak and prompt injection threats. I will then discuss mitigation strategies that can be employed to defend against such attacks, spanning the alignment, inference, and system stages.
All times below are in the local time zone.
9:20–9:30 | Opening Remarks |
9:30–10:30 | Keynote Speech 1: Ben Zhao (Professor, University of Chicago) |
10:30–11:00 | Morning Coffee Break |
11:00–11:30 | Session I: Cybersecurity Threat Intelligence |
11:00: ThreatKG: An AI-Powered System for Automated Online Threat Intelligence
Peng Gao (Virginia Tech), Xiaoyuan Liu (University of California, Berkeley), Edward Choi (University of California, Berkeley), Sibo Ma (University of California, Berkeley), Xinyu Yang (Virginia Tech), and Dawn Song (University of California, Berkeley)
11:10: Mitigating Unauthorized Speech Synthesis for Voice-Activated Systems
Zhisheng Zhang (Beijing University of Posts and Telecommunications), Qianyi Yang (Beijing University of Posts and Telecommunications), Derui Wang (CSIRO's Data61), Pengyang Huang (Beijing University of Posts and Telecommunications), Yuxin Cao (National University of Singapore), Kai Ye (The University of Hong Kong), and Jie Hao (Beijing University of Posts and Telecommunications)
11:20: How to Efficiently Manage Critical Infrastructure Vulnerabilities? Toward Large Code-graph Models
Hongying Zhang (Shanghai Jiao Tong University), Gaolei Li (Shanghai Jiao Tong University), Shenghong Li (Shanghai Jiao Tong University), Hongfu Liu (Shanghai Jiao Tong University), Shuo Wang (Shanghai Jiao Tong University), and Jianhua Li (Shanghai Jiao Tong University)
11:30–12:00 | Session II: Adversarial Attacks and Robustness |
11:30: Adversarial Attacks to Multi-Modal Models
Zhihao Dou (Duke University), Xin Hu (The University of Tokyo), Haibo Yang (Rochester Institute of Technology), Zhuqing Liu (The Ohio State University), and Minghong Fang (Duke University)
11:40: TrojFair: Trojan Fairness Attacks
Jiaqi Xue (University of Central Florida), Mengxin Zheng (University of Central Florida), Yi Sheng (George Mason University), Lei Yang (George Mason University), Qian Lou (University of Central Florida), and Lei Jiang (Indiana University Bloomington)
11:50: PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
Kaijie Zhu (Institute of Automation, Chinese Academy of Sciences), Jindong Wang (Microsoft Research), Jiaheng Zhou (Institute of Automation, Chinese Academy of Sciences), Zichen Wang (Institute of Automation, Chinese Academy of Sciences), Hao Chen (Carnegie Mellon University), Yidong Wang (Peking University), Linyi Yang (Westlake University), Wei Ye (Peking University), Yue Zhang (Westlake University), Neil Gong (Duke University), and Xing Xie (Microsoft)
12:00–14:00 | Lunch |
14:00–15:00 | Keynote Speech 2: Chaowei Xiao (NVIDIA and University of Wisconsin, Madison) |
15:00–15:30 | Afternoon Coffee Break |
15:30–16:00 | Session III: Large Language Model Security |
15:30: Have You Merged My Model? On The Robustness of Merged Machine Learning Models
Tianshuo Cong (Tsinghua University), Delong Ran (Tsinghua University), Zesen Liu (Xidian University), Xinlei He (The Hong Kong University of Science and Technology (Guangzhou)), Jinyuan Liu (Tsinghua University), Yichen Gong (Tsinghua University), Qi Li (Tsinghua University), Anyu Wang (Tsinghua University), and Xiaoyun Wang (Tsinghua University)
15:40: "Prompter Says": A Linguistic Approach to Understanding and Detecting Jailbreak Attacks Against Large-Language Models Dylan Lee (University of California, Irvine), Shaoyuan Xie (University of California, Irvine), Shagoto Rahman (University of California, Irvine), Kenneth Pat (University of California, Irvine), David Lee (University of California, Irvine), and Qi Alfred Chen (University of California, Irvine) |
|
15:50: Towards Large Language Model (LLM) Forensics Using Feature Extraction
Maxim Chernyshev (Deakin University), Zubair Baig (Deakin University), and Robin Ram Mohan Doss (Deakin University)
16:00–16:20 | Session IV: Secure Learning and Model Attribution |
16:00: CryptoTrain: Fast Secure Training on Encrypted Data
Jiaqi Xue (University of Central Florida), Yancheng Zhang (University of Central Florida), Yanshan Wang (University of Pittsburgh), Xueqiang Wang (University of Central Florida), Hao Zheng (University of Central Florida), and Qian Lou (University of Central Florida)
16:10: Detection and Attribution of Diffusion Model of Character Animation Based on Spatio-Temporal Attention
Fazhong Liu (Shanghai Jiao Tong University), Yan Meng (Shanghai Jiao Tong University), Tian Dong (Shanghai Jiao Tong University), Guoxing Chen (Shanghai Jiao Tong University), and Haojin Zhu (Shanghai Jiao Tong University)
16:20–16:30 | Concluding Remarks |
As Large AI Systems and Models (LAMs) become increasingly pivotal in a wide array of applications, their potential impact on the privacy and cybersecurity of critical infrastructure becomes a pressing concern. LAMPS is dedicated to addressing these unique challenges, fostering a dialogue on the latest advancements and ethical considerations in enhancing the privacy and cybersecurity of LAMs, particularly in the context of critical infrastructure protection.
LAMPS will bring together global experts to dissect the nuanced privacy and cybersecurity challenges posed by LAMs, especially in critical infrastructure sectors. This workshop will serve as a platform to unveil novel techniques, share best practices, and chart the course for future research, with a special emphasis on the delicate balance between advancing AI technologies and securing critical digital and physical systems.
Topics of interest include (but are not limited to):
Secure Large AI Systems and Models for Critical Infrastructure
Large AI Systems and Models' Privacy and Security Vulnerabilities
Data Anonymization and Synthetic Data
Human-Centric Large AI Systems and Models
Submitted papers must not substantially overlap with papers that have been published or are simultaneously submitted to a journal or a conference with proceedings. Regular submissions should be at most 10 pages in the ACM double-column format, excluding well-marked appendices, and at most 12 pages in total. Short submissions should be at most 4 pages in the ACM double-column format. Systematization of Knowledge (SoK) submissions may be at most 15 pages, excluding well-marked appendices, and at most 17 pages in total. Submissions are not required to be anonymized.
Submission link: https://ccs24-lamps.hotcrp.com
Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will be available on a CD to the workshop attendees and will become part of the ACM Digital Library.
The archival papers will be included in the workshop proceedings. Due to time constraints, accepted papers will be selected for presentation as either a talk or a poster based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance.
Authors are responsible for obtaining appropriate publication clearances. Attendance and presentation by at least one author of each accepted paper at the workshop are mandatory for the paper to be included in the proceedings.
For any questions, please contact one of the workshop organizers at jason.xue@data61.csiro.au or wangshuosj@sjtu.edu.cn.
Chong | Xiang | Princeton University | United States of America |
Derui | Wang | CSIRO's Data61 | Australia |
Giovanni | Apruzzese | University of Liechtenstein | Liechtenstein |
Jamie | Hayes | Google DeepMind | United Kingdom |
Jinyuan | Jia | The Pennsylvania State University | United States of America |
Konrad | Rieck | TU Berlin | Germany |
Kristen | Moore | CSIRO's Data61 | Australia |
Mainack | Mondal | Indian Institute of Technology, Kharagpur | India |
Mathias | Humbert | University of Lausanne | Switzerland |
Minghong | Fang | Duke University | United States of America |
Peng | Gao | Virginia Tech | United States of America |
Pin-Yu | Chen | IBM Research | United States of America |
Sagar | Samtani | Indiana University | United States of America |
Sai Teja | Peddinti | Google | United States of America |
Shiqing | Ma | University of Massachusetts Amherst | United States of America |
Shuang | Hao | University of Texas at Dallas | United States of America |
Stjepan | Picek | Radboud University | Netherlands |
Tian | Dong | Shanghai Jiao Tong University | China |
Tianshuo | Cong | Tsinghua University | China |
Torsten | Krauß | University of Wuerzburg | Germany |
Varun | Chandrasekaran | University of Illinois Urbana-Champaign | United States of America |
Xiaoning | Du | Monash University | Australia |
Xinlei | He | The Hong Kong University of Science and Technology (Guangzhou) | China |
Yanjiao | Chen | Zhejiang University | China |
Yinzhi | Cao | Johns Hopkins University | United States of America |