Making AI models ‘forget’ undesirable data hurts their performance

So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]

© 2024 TechCrunch. All rights reserved. For personal use only.
