
Security and Privacy in Machine Learning
Sharif University of Technology, Iran
CE Department
Fall 2024

   

Welcome to the public page for the course on Security and Privacy in Machine Learning (SPML). The main objective of the course is to introduce students to the principles of security and privacy in machine learning. Students become familiar with the vulnerabilities of machine learning models in both the training and inference phases, and with methods for improving the robustness and privacy of machine learning models.

Course Logistics

Instructor

   Amir Mahdi Sadeghzadeh
   Office: CE-704    Lab: CE-502    Office Hours: By appointment (through Email)
   Email: amsadeghzadeh_at_gmail.com
   URL: amsadeghzadeh.github.io

Course Staff

Course Pages

Main References

The main references for the course are research papers published in top-tier conferences and journals in computer security (SP, CCS, USENIX Security, EuroSP) and machine learning (NeurIPS, ICLR, ICML, CVPR, ECCV). The following three books are used to present background topics in machine learning and deep learning in the first part of the course.

Grading Policy

Assignments (30%), Mid-term and Mini-exam (20%), Paper review and presentation (20%), and Final (30%).

Course Policy

Academic Honesty

Sharif CE Department Honor Code (please read it carefully!)

Homework Submission

Submit your answers as a .pdf or .zip file on the course page on the Quera website, using the following filename format: HW[HW#]-[FamilyName]-[std#] (for example, HW3-Hoseini-401234567).

Late Policy


# Date Topic Content Lecture Reading
1 7/1 Course Intro. The scope and contents of the course Lec1 Towards the Science of Security and Privacy in Machine Learning  
2 7/3 Deep Learning Review ML Intro., Perceptron, Logistic regression, GD, Regularization Lec2 Pattern Recognition and Machine Learning Ch.1 & Ch.4; Deep Learning Ch.5 & Ch.6
3 7/8 Deep Learning Review Softmax Classifier, Neural networks Lec3 Deep Learning Ch.6, Ch.8 & Ch.9; The Neural Network, A Visual Introduction; Why are neural networks so effective?
4 7/10 Deep Learning Review Forward and backward propagation, Convolutional Neural Networks (CNNs) Lec4 Deep Learning Ch.6 & Ch.9; Dive into Deep Learning Ch.8; Backpropagation for a Linear Layer; What is backpropagation really doing?
5 7/15 Adversarial Examples AE Generating Methods Lec5 Intriguing Properties of Neural Networks  
6 7/17 Adversarial Examples AE Generating Methods Lec6 Explaining and Harnessing Adversarial Examples  
7 7/22 Adversarial Examples AE Generating Methods Lec7 Towards Evaluating the Robustness of Neural Networks  
8 7/24 Adversarial Examples AE Generating Methods Lec8 Universal Adversarial Perturbations; Adversarial Patch
9 7/29 Adversarial Examples Defenses Against AEs Lec9 Towards Deep Learning Models Resistant to Adversarial Attacks; Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
10 8/1          
11 8/6 Adversarial Examples Defenses Against AEs Lec10 Certified Adversarial Robustness via Randomized Smoothing; Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
12 8/8 Adversarial Examples Defenses Against AEs Lec11 Certified Adversarial Robustness via Randomized Smoothing; Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers; Practical Black-Box Attacks against Machine Learning
13 8/13 Adversarial Examples Black-box AEs Lec12 ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models; Black-box Adversarial Attacks with Limited Queries and Information
14 8/15 Adversarial Examples Black-box AEs - Data Poisoning Lec13 Black-box Adversarial Attacks with Limited Queries and Information; BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain; Clean-Label Backdoor Attacks
15 8/20 Data Poisoning Poisoning Attacks and Defenses Lec14 Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks; Deep Partition Aggregation: Provable Defense against General Poisoning Attacks