Machine Unlearning in 2024
41 minute read
Written by Ken Liu ∙ May 2024
As our ML models today become larger and their (pre-)training sets grow to inscrutable sizes, people are increasingly interested in machine unlearning as a way to edit away undesired things from models, such as private data, stale knowledge, copyrighted materials, toxic/unsafe content, dangerous capabilities, and misinformation, without retraining them from scratch.
Machine unlearning can be broadly described as removing the influences of training data from a trained model.
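One common way to make this precise (a standard formalization from the unlearning literature, not necessarily the exact definition this article goes on to use) is to ask that an unlearning algorithm $U$, applied to a model trained by algorithm $\mathcal{A}$ on dataset $D$, produce something close in distribution to a model retrained from scratch without the forget set $D_f \subseteq D$:

$$
U\big(\mathcal{A}(D),\, D,\, D_f\big) \;\approx_{\text{dist}}\; \mathcal{A}(D \setminus D_f)
$$

Exact unlearning demands equality of these two distributions, while approximate unlearning relaxes this to closeness, for example in a differential-privacy-style $(\varepsilon, \delta)$ sense.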
Read more at ai.stanford.edu