BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//LBNL Physics Division Research Progress Meetings - ECPv6.8.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:LBNL Physics Division Research Progress Meetings
X-ORIGINAL-URL:https://rpm.physics.lbl.gov
X-WR-CALDESC:Events for LBNL Physics Division Research Progress Meetings
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20180311T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20181104T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20180529T160000
DTEND;TZID=America/Los_Angeles:20180529T170000
DTSTAMP:20260413T193316Z
CREATED:20180514T084613Z
LAST-MODIFIED:20180514T084613Z
UID:851-1527609600-1527613200@rpm.physics.lbl.gov
SUMMARY:Nicholas Carlini (UCB) "Adversarial Machine Learning"
DESCRIPTION:Abstract:\nMany fundamental properties of neural networks are still not well\nunderstood. This talk studies two of these from an adversarial perspective.\nI begin with my main line of research and examine the apparently-fundamental\nsusceptibility of neural networks to adversarial examples. I develop effective\nalgorithms for generating adversarial examples and find that most training\nregimes are ineffective at increasing robustness. Then\, I perform a brief\nexamination of neural network memorization\, and demonstrate that training\ndata can be efficiently extracted from a trained model given only black-box\naccess to that model. I conclude with directions for future research.
URL:https://rpm.physics.lbl.gov/event/nicholas-carlini-ucb-adversarial-machine-learning/
LOCATION:HYBRID 50A-5132 (Sessler Conference Room)\, https://lbnl.zoom.us/j/91782268585\, 50A-5132
END:VEVENT
END:VCALENDAR