Deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This course covers foundational ideas from formal verification and their application to deep learning.
The course is based on chapters from the instructor's ongoing book on the subject and on research papers from machine learning and verification.
Tue chapter 1 / slides
Thu chapter 2
Tue chapter 3 / Hoare's classic paper / Floyd's paper / Turing's obscure paper
Thu chapter 4 / see also slides from Dillig's course
Tue chapter 5 / see also encodings of traditional programs
Thu cancelled
Tue chapter 5, continued / see Z3 tutorial
Thu chapter 6 / see abstract DPLL paper
Tue ✈️ no class: Aws out of town for a PC meeting
Thu chapter 6 / see abstract DPLL paper
Tue chapter 7 / see paper from Dutertre & de Moura
Thu chapter 7 / see Reluplex paper
Tue chapter 8 / Lifting neural networks to operate on sets
Thu chapter 8 / Interval abstraction of neural operations
Tue chapter 9 / Zonotope abstraction / see DeepZ paper
Thu chapter 9 / Polyhedra abstraction / see Stars and DeepPoly papers
Tue tuning virtual lectures + assignment 2 preview
Thu Joint abstraction / see k-ReLU paper
Tue Abstract training / see DiffAI and IBP papers (IBP is an easier read)
Thu Randomized smoothing + NLP / see Cohen et al. and Huang et al.
Tue Invariants and NNs / see Zhu et al.
Thu MDPs and verification / see Zhu et al.
Tue
Thu
Tue presentations
Thu presentations
Tue presentations
Thu presentations + farewell
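As a small taste of the interval abstraction covered in chapter 8, here is a minimal sketch of lifting two neural-network operations to operate on sets of inputs described by per-neuron bounds. This is illustrative code, not the book's; the function names and example weights are made up for the sketch.

```python
# Interval abstraction sketch: propagate per-neuron bounds [lo, hi]
# through an affine layer y = Wx + b, then through a ReLU.

def affine_interval(lo, hi, W, b):
    """For each output, pick the input endpoint that matches each weight's sign."""
    ylo, yhi = [], []
    for row, bi in zip(W, b):
        ylo.append(bi + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row)))
        yhi.append(bi + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row)))
    return ylo, yhi

def relu_interval(lo, hi):
    """ReLU is monotone, so it can be applied endpoint-wise."""
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Example: a 2-neuron layer over the input box [0, 1] x [0, 1].
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]
lo, hi = affine_interval(lo, hi, W, b)  # ([-1.0, -1.0], [1.0, 0.0])
lo, hi = relu_interval(lo, hi)          # ([0.0, 0.0], [1.0, 0.0])
```

Every concrete output of the layer on any input from the box is guaranteed to lie inside the resulting intervals, which is what makes this abstraction sound (if imprecise).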