OpenAI's State-of-the-Art Machine Vision AI Fooled By Handwritten Notes
Researchers from machine learning lab OpenAI have discovered that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad. The Verge reports:

As illustrated in the image above, simply writing down the name of an object and sticking it on another can be enough to trick the software into misidentifying what it sees. "We refer to these attacks as typographic attacks," write OpenAI's researchers in a blog post. "By exploiting the model's ability to read text robustly, we find that even photographs of hand-written text can often fool the model." They note that such attacks are similar to "adversarial images" that can fool commercial machine vision systems, but are far simpler to produce.

The danger posed by this specific attack is, at least for now, nothing to worry about. The OpenAI software in question is an experimental system named CLIP that isn't deployed in any commercial product. Indeed, the very nature of CLIP's unusual machine learning architecture created the weakness that enables this attack to succeed. CLIP is intended to explore how AI systems might learn to identify objects without close supervision, by training on huge databases of image-text pairs. In this case, OpenAI used some 400 million image-text pairs scraped from the internet to train CLIP, which was unveiled in January.
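To make the mechanism concrete, here is a minimal toy sketch of how a CLIP-style system performs zero-shot identification: it embeds an image and several candidate text labels into the same vector space and picks the label whose embedding is most similar to the image's. The embedding vectors below are made-up numbers, not real CLIP outputs, and the `classify` helper is hypothetical; a real system would run CLIP's image and text encoders. The sketch only illustrates why a written note can flip the prediction.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical text embeddings for two candidate labels
# (toy 3-dimensional vectors; real CLIP embeddings are much larger).
label_embeddings = {
    "apple": [0.9, 0.1, 0.0],
    "iPod":  [0.1, 0.9, 0.2],
}

def classify(image_embedding):
    # Zero-shot classification: choose the label whose text
    # embedding is closest to the image embedding.
    return max(label_embeddings,
               key=lambda lbl: cosine(image_embedding, label_embeddings[lbl]))

# An ordinary photo of an apple embeds near the "apple" text vector...
print(classify([0.8, 0.2, 0.1]))  # → apple
# ...but because the model reads text in the image, taping on a note
# saying "iPod" can drag the image embedding toward that label instead.
print(classify([0.2, 0.8, 0.3]))  # → iPod
```

Because classification is just "nearest text label," any strong text signal in the image, including a handwritten note, competes directly with the visual content, which is exactly the weakness the typographic attack exploits.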
from Slashdot https://ift.tt/3t2Aigo