Invisibility cloaks have always been the stuff of science fiction, but people might soon be able to live out their Harry Potter dreams. Recently, a team of researchers at the University of Maryland, College Park, working with Facebook Artificial Intelligence, developed a real-life "invisibility cloak". The cloak is actually a colourful sweater that deletes the wearer right out of a machine's vision.
Notably, the research team printed adversarial patterns on the sweater that evade the most common object detectors, making the wearer undetectable, according to a Gagadget report. Simply put, the sweater makes a person 'invisible' to AI models that detect people.
The developers started out with the original goal of testing machine learning systems for vulnerabilities; the result, however, was a print on clothing that AI cameras can't see. A user on Reddit shared a video of the test footage with a caption that reads, "This sweater developed by the University of Maryland utilizes 'adversarial patterns' to become an invisibility cloak against AI."
"This stylish pullover is a great way to stay warm this winter whether in the office or on the go. It features a stay-dry microfleece lining, a modern fit, and an adversarial pattern that evades the most common object detectors. In [our] demonstration, the YOLOv2 detector is evaded using a pattern trained on the COCO dataset with a carefully constructed objective," the team wrote.
So, how does it work?
The researchers explained that they used the COCO dataset, on which the computer vision algorithm YOLOv2 is trained, and identified the patterns the detector relies on to recognise a person. They then created an opposite, adversarial pattern and transformed it into an image: a print on a sweater. As a result, the owner of such a sweater can hide from detection systems.
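To make the idea concrete, here is a minimal, hypothetical sketch of this kind of adversarial-pattern optimisation. It is not the team's actual code: the detector below is a toy stand-in for YOLOv2, and the patch placement, training images, and loss are simplified assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for a person detector such as YOLOv2: any module that maps
# an image batch to per-prior "person" confidences works for this sketch.
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, stride=8, padding=1)
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # one score per grid cell

    def forward(self, x):
        scores = self.head(F.relu(self.backbone(x)))
        return torch.sigmoid(scores).flatten(1)  # shape: (batch, num_priors)

detector = ToyDetector().eval()
for p in detector.parameters():
    p.requires_grad_(False)  # the attack optimises the input, not the model

# The learnable adversarial pattern, i.e. the print on the sweater.
patch = torch.rand(3, 64, 64, requires_grad=True)
optimiser = torch.optim.Adam([patch], lr=0.01)

def paste_patch(images, patch):
    # Crude stand-in for rendering the patch onto the wearer's torso;
    # the real pipeline would warp, scale, and relight it per frame.
    out = images.clone()
    out[:, :, 80:144, 80:144] = patch.clamp(0, 1)
    return out

# Stand-in for training photos of people (e.g. crops from COCO).
images = torch.rand(8, 3, 224, 224)

for step in range(200):
    adversarial = paste_patch(images, patch)
    person_scores = detector(adversarial)
    # Push down the strongest remaining detection in every image.
    loss = person_scores.max(dim=1).values.mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Scaled up to a real detector and thousands of training images, the optimised pattern is what ends up printed on the fabric.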
Explaining the work, the team wrote on the University of Maryland website, "Most work on real-world adversarial attacks has focused on classifiers, which assign a holistic label to an entire image, rather than detectors which localize objects within an image. Detectors work by considering thousands of 'priors' (potential bounding boxes) within the image with different locations, sizes, and aspect ratios. To fool an object detector, an adversarial example must fool every prior in the image, which is much more difficult than fooling the single output of a classifier."
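In loss terms, the distinction the team draws might look like the following sketch (shapes and names are illustrative assumptions, not their code): a classifier attack suppresses a single output, while a detector attack has to suppress the worst-case score over every prior.

```python
import torch

# Illustrative shapes only.
classifier_logit = torch.randn(1)        # classifier: one holistic "person" score
prior_scores = torch.rand(13 * 13 * 5)   # YOLOv2-style grid: 13x13 cells x 5 anchors

# Fooling the classifier: minimise a single number.
classifier_attack_loss = classifier_logit.sum()

# Fooling the detector: the person is still found if ANY prior fires,
# so the objective targets the maximum confidence across all priors.
detector_attack_loss = prior_scores.max()
```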
Though many on Reddit were fascinated by the "magic sweater", others questioned its effectiveness. One user joked, "So ugly even AI doesn't want to see it." Another commented, "I mean, invisibility seems a bit pushing it. The camera is still recognising him, just not 100%... Am I wrong in thinking, let's say if police were using this to find criminals?"
According to a Hackster report, the YOLOv2-targeting adversarial sweatshirt achieved only around a 50 per cent success rate in wearable tests.