Deepfakes are videos designed to trick you into believing they are real. But there is a way to recognize them

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as a personal weapon of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that "seeing is believing." Not anymore.

Most deepfakes are made by showing a computer algorithm many images of a person, then having it use what it saw to generate new face images. At the same time, the person's voice is synthesized, so the result both looks and sounds like the person has said something new.

One of the most famous deepfakes sounds a warning.

Some of my research group's earlier work let us detect deepfake videos that did not include a person's normal amount of eye blinking – but the latest generation of deepfakes has adapted, so our research continues to advance. Now, our research can identify the manipulation of a video by looking closely at the pixels of specific frames. Taking one step further, we also developed an active measure to protect individuals from becoming victims of deepfakes.

Finding the flaws

In two recent studies, we described ways to detect deepfakes with flaws that can't be fixed easily by the fakers.

When a deepfake video synthesis algorithm generates new facial expressions, the new images don't always match the exact positioning of the person's head, the lighting conditions or the distance to the camera. To make the fake faces blend into the surroundings, they have to be geometrically transformed – rotated, resized or otherwise distorted. This process leaves digital artifacts in the resulting image.

You may have noticed some artifacts from particularly severe transformations. These can make a photo look obviously doctored, with blurry borders and artificially smooth skin. More subtle transformations still leave evidence, and we have taught an algorithm to detect it, even when people can't see the differences.

A real video of Mark Zuckerberg. An algorithm detects that this purported video of Mark Zuckerberg is a fake.

These artifacts can change if a deepfake video shows a person who is not looking directly at the camera. Video of a real person captures the face moving in three dimensions, but deepfake algorithms are not yet able to fabricate faces in 3D. Instead, they generate a regular two-dimensional image of the face and then try to rotate, resize and distort that image to fit the direction the person is supposed to be looking.

They don't yet do this very well, which provides an opportunity for detection. We designed an algorithm that calculates which way the person's nose is pointing in an image. It also measures which way the head is pointing, computed from the contour of the face. In a real video of a real person's head, these should all line up quite predictably. In deepfakes, though, they are often misaligned, as sketched below. When a computer puts Nicolas Cage's face on Elon Musk's head, it may not line up the face and the head correctly.

Siwei Lyu, CC BY-ND
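As a rough illustration of this consistency check – not the exact procedure from the research papers – the idea can be sketched in Python: estimate the head pose once from the central facial landmarks and once from the face contour, then measure how much the two estimates disagree. The landmark extraction (e.g., with dlib or MediaPipe), the camera model and the threshold below are all simplifying assumptions.

```python
# A minimal sketch of a head-pose consistency check, assuming 2D facial
# landmarks have already been extracted and split into a central-face
# subset and a face-contour subset, each paired with points from a
# generic 3D face model. Illustrative only.
import numpy as np
import cv2

def estimate_rotation(model_points_3d, image_points_2d, camera_matrix):
    """Estimate head rotation from 2D landmarks and a generic 3D face model."""
    ok, rvec, _tvec = cv2.solvePnP(
        model_points_3d, image_points_2d, camera_matrix,
        distCoeffs=None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rot_matrix, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 matrix
    return rot_matrix

def pose_disagreement(central_3d, central_2d, contour_3d, contour_2d,
                      camera_matrix):
    """Angle (radians) between the pose from central landmarks and the
    pose from the face contour. In real video the two nearly agree; a
    deepfake that pastes a synthesized 2D face onto a head often leaves
    them misaligned."""
    r_central = estimate_rotation(central_3d, central_2d, camera_matrix)
    r_contour = estimate_rotation(contour_3d, contour_2d, camera_matrix)
    if r_central is None or r_contour is None:
        return None
    # Relative rotation between the two estimates; its angle measures
    # how much the nose direction and the head outline disagree.
    r_rel = r_central @ r_contour.T
    cos_angle = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle)

# A frame might be flagged when the disagreement exceeds a tuned threshold:
# if pose_disagreement(...) > 0.2:  # about 11 degrees; illustrative value
#     print("head pose inconsistent -- possible deepfake frame")
```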
Defending against deepfakes

The science of detecting deepfakes is, in effect, an arms race – fakers will get better at making their fictions, so our research always has to try to keep up, and even get a bit ahead. If there were a way to influence the algorithms that create deepfakes to be worse at their task, our method would be better at detecting the fakes. My group recently found a way to do just that.

At left, a face is easily detected in an image before our processing. In the middle, we added noise that causes an algorithm to detect other faces, but not the real one. At right, the changes we added to the image, enhanced 30 times to make them visible. Siwei Lyu, CC BY-ND

Image libraries of faces are assembled by algorithms that process thousands of online photos and videos and use machine learning to detect and extract faces. A computer might look at a class photo, detect the faces of all the students and teachers, and add just those faces to the library. When the resulting library contains many high-quality face images, the resulting deepfake is more likely to succeed in deceiving its audience.

We have found a way to add specially designed noise to digital photographs or videos – noise that is not visible to human eyes but can fool face detection algorithms. It can conceal the pixel patterns that face detectors use to locate a face, and it creates decoys that suggest there is a face where there is none, such as in a piece of the background or a square of a person's clothing. Subtle changes like these can throw face detection algorithms off track; a sketch of this kind of protective perturbation appears at the end of this section.

With fewer real faces and more nonfaces polluting the training data, a deepfake algorithm will be worse at generating a fake face. That not only slows down the process of making a deepfake, but also makes the resulting deepfake more flawed and easier to detect.

As we develop this algorithm, we hope to be able to apply it to any images that someone is uploading to social media or another online site. During the upload process, a person might be asked, "Do you want to protect the faces in this video or image against being used in deepfakes?" If the user chooses yes, the algorithm could add the digital noise, letting people online see the faces but effectively hiding them from algorithms that might try to impersonate them.
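As a generic illustration of the underlying idea – not the actual system, whose detector and objective are more involved – the following Python sketch nudges an image with small, bounded noise to lower the confidence of a face detector. The face_score function is a placeholder assumption standing in for any differentiable face-detection network; the step counts and bounds are illustrative.

```python
# A generic adversarial-perturbation sketch in the spirit of the defense
# described above. `face_score` is a PLACEHOLDER for any differentiable
# face-detection network mapping an image tensor to a face-confidence
# score; the article does not specify the real detector or loss.
import torch

def hide_faces(image, face_score, steps=40, epsilon=8 / 255, step_size=1 / 255):
    """Add small, bounded noise that lowers a detector's face confidence.

    image: float tensor in [0, 1], shape (1, 3, H, W)
    face_score: callable returning a scalar confidence that a face is present
    epsilon: maximum per-pixel change, kept small so humans see no difference
    """
    noise = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        score = face_score(image + noise)
        score.backward()                      # gradient of confidence w.r.t. noise
        with torch.no_grad():
            # Step against the gradient to push the face confidence down.
            noise -= step_size * noise.grad.sign()
            noise.clamp_(-epsilon, epsilon)   # keep the change invisible
            noise.grad.zero_()
    return (image + noise.detach()).clamp(0.0, 1.0)
```

The same loop could, in principle, also raise the detector's confidence on background patches to create the decoy "faces" mentioned above; that is omitted here to keep the sketch short.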
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image copyright: ALEXANDRA ROBINSON / AFP / Getty Images