AI Models Spit Out Photos of Real People and Copyrighted Images
MIT Technology Review reports: Popular image-generation models can be prompted to produce identifiable photos of real people, potentially threatening their privacy, according to new research. The work also shows that these AI systems can be made to regurgitate exact copies of medical images and copyrighted work by artists, a finding that could strengthen the case for artists who are currently suing AI companies for copyright violations.

The researchers, from Google, DeepMind, UC Berkeley, ETH Zürich, and Princeton, got their results by prompting Stable Diffusion and Google's Imagen with captions for images, such as a person's name, many times. They then analyzed whether any of the generated images matched original images in the model's training data. The group managed to extract over 100 replicas of images in the AI's training set....

The paper, titled "Extracting Training Data from Diffusion Models," is the first time researchers have managed to prove that these AI models memorize images in their training sets, says Ryan Webster, a PhD student at the University of Caen Normandy in France, who has studied privacy in other image-generation models but was not involved in the research. This could have implications for startups wanting to use generative AI models in health care, because it shows that these systems risk leaking sensitive private information. OpenAI, Google, and Stability AI did not respond to our requests for comment.

Slashdot user guest reader notes a recent class-action lawsuit arguing that an art-generating AI is "a 21st-century collage tool.... A diffusion model is a form of lossy compression applied to the Training Images."
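As a rough illustration of the extraction procedure described above, here is a minimal Python sketch: it prompts a public Stable Diffusion checkpoint many times with a single caption, then flags generations that are near-duplicates of a known training image. The checkpoint name, caption, file path, CLIP-based cosine similarity, and the 0.95 threshold are all assumptions chosen for illustration; the paper's actual pipeline, distance metric, and memorization criteria differ.

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

caption = "A portrait photo of Jane Doe"   # hypothetical training-set caption
num_samples = 100                          # prompt the model many times

# Public Stable Diffusion checkpoint (an assumption; the paper also attacked Imagen).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# CLIP stands in here as a generic image-similarity measure.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(images):
    # Map PIL images to unit-norm CLIP embeddings so that a dot product
    # between two embeddings is their cosine similarity.
    inputs = proc(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Step 1: generate many samples for the same caption.
generated = [pipe(caption).images[0] for _ in range(num_samples)]

# Step 2: compare each sample against the known training image for this caption.
training_image = Image.open("training_images/jane_doe.png")  # hypothetical path
sims = embed(generated) @ embed([training_image]).T          # (num_samples, 1)

THRESHOLD = 0.95  # near-duplicate cutoff; an assumption, not the paper's value
for i, s in enumerate(sims.squeeze(1).tolist()):
    if s > THRESHOLD:
        print(f"sample {i} is a near-copy of the training image (sim={s:.3f})")

The key design point the study exploits is that a memorized training image reappears almost verbatim across repeated samplings of the same caption, so even a coarse similarity check like the one above can surface it.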
from Slashdot https://ift.tt/eK7PDHR