“As part of Intel’s Responsible AI work, the company has productized FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel’s deepfake detection platform is the world’s first real-time deepfake detector that returns results in milliseconds.
“Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human: subtle “blood flow” in the pixels of a video. When our hearts pump blood, our veins change color. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.”
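Intel has not published FakeCatcher's implementation, but the pipeline described above — aggregate photoplethysmography (PPG) signals per facial region, then arrange them into a spatiotemporal map for a classifier — can be sketched in a few lines. The grid size, green-channel choice, and synthetic "video" below are all illustrative assumptions, not Intel's actual parameters.

```python
import numpy as np

def spatiotemporal_ppg_map(frames, grid=(4, 4)):
    """Toy rPPG-style feature extraction (illustrative, not FakeCatcher).

    frames: (T, H, W, 3) uint8 video of a face crop.
    Returns a (grid_cells, T) map: the mean green-channel intensity of
    each facial region over time, i.e. a crude "blood flow" signal grid.
    """
    T, H, W, _ = frames.shape
    gh, gw = grid
    ch, cw = H // gh, W // gw
    cells = []
    for i in range(gh):
        for j in range(gw):
            # Green channel carries most of the pulsatile skin-color change
            roi = frames[:, i*ch:(i+1)*ch, j*cw:(j+1)*cw, 1]
            sig = roi.reshape(T, -1).mean(axis=1).astype(float)
            sig -= sig.mean()  # drop the DC component, keep the pulse
            cells.append(sig)
    return np.stack(cells)  # shape: (gh*gw, T)

# Synthetic 2-second "face video" at 30 fps with a faint 1.2 Hz pulse
t = np.arange(60) / 30.0
pulse = (2 * np.sin(2 * np.pi * 1.2 * t)).reshape(-1, 1, 1, 1)
frames = np.clip(128 + pulse + np.random.randn(60, 64, 64, 3),
                 0, 255).astype(np.uint8)
stmap = spatiotemporal_ppg_map(frames)
print(stmap.shape)  # (16, 60)
```

In the described system, a map like `stmap` would then be fed to a deep network trained to separate the coherent pulse patterns of real faces from the incoherent signals of synthesized ones.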
“DeepFaceLab is a tool for creating deepfakes. Its designers, who have released their code openly on the GitHub platform, estimate that nearly all deepfake videos created today are made with this software. Their page features numerous examples: billionaire Elon Musk's face inserted into the film Interstellar, the actor Arnold Schwarzenegger rejuvenated, North Korean head of state Kim Jong-un delivering a speech on the preservation of democracy… But also, further down, a link to MrDeepFakes, a pornographic website.”
“At first glance, Ramsey’s profile looks like many others on LinkedIn: the bland headshot with a slightly stiff smile; a boilerplate description of RingCentral, the software company where she says she works; and a brief job history. She claims to have an undergraduate business degree from New York University and gives a generic list of interests: CNN, Unilever, Amazon, philanthropist Melinda French Gates. But there were oddities in the photo: the single earring and strange hair, the placement of her eyes, the blurry background. Alone, any of these clues might be explained away, but together, they aroused DiResta’s suspicions […].
That chance message launched DiResta and her colleague Josh Goldstein at the Stanford Internet Observatory on an investigation that uncovered more than 1,000 LinkedIn profiles using what appear to be faces created by artificial intelligence.”
“Which Face Is Real has been developed by Jevin West and Carl Bergstrom at the University of Washington as part of the Calling Bullshit project. All images are either computer-generated from thispersondoesnotexist.com using the StyleGAN software, or real photographs from the FFHQ dataset of Creative Commons and public domain images. License rights notwithstanding, we will gladly respect any requests to remove specific images; please send the URL of the results pages showing the image in question.”
Source: Which Face Is Real?
“Deepfakes have become more believable in recent years. In some cases, humans can no longer easily tell some of them apart from genuine images. Although detecting deepfakes remains a compelling challenge, their increasing sophistication opens up more potential lines of inquiry, such as: What happens when deepfakes are produced not just for amusement and awe, but for malicious intent on a grand scale? Today, we — in partnership with Michigan State University (MSU) — are presenting a research method of detecting and attributing deepfakes that relies on reverse engineering from a single AI-generated image to the generative model used to produce it. Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with.”
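The Meta/MSU work infers properties of the generative model from a single fake image. Their actual method is not reproduced here; a common simplified proxy in the model-attribution literature is to extract a high-frequency noise residual from the image and correlate it against stored "fingerprints" of known generators. Everything below — the box-blur denoiser, the fingerprint dictionary, the model names — is an illustrative assumption.

```python
import numpy as np

def residual(img):
    """Crude high-pass filter: image minus a 3x3 box blur (toy denoiser)."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(pad[i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    return img - blur

def attribute(img, fingerprints):
    """Return the known generator whose fingerprint best matches
    the image's noise residual (normalized cross-correlation)."""
    r = residual(img).ravel()
    r = (r - r.mean()) / (r.std() + 1e-8)
    best, best_score = None, -np.inf
    for name, fp in fingerprints.items():
        f = fp.ravel()
        f = (f - f.mean()) / (f.std() + 1e-8)
        score = float(r @ f) / r.size
        if score > best_score:
            best, best_score = name, score
    return best

# Hypothetical fingerprints of two known generators, plus a "fake"
# image carrying model_B's fingerprint on top of random content
rng = np.random.default_rng(0)
fps = {"model_A": rng.normal(size=(32, 32)),
       "model_B": rng.normal(size=(32, 32))}
fake = rng.normal(size=(32, 32)) + 0.5 * fps["model_B"]
print(attribute(fake, fps))
```

The appeal of residual-based attribution is exactly the property the quote highlights: it needs only the single fake image, with no access to the generator itself.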
“NOTE: The audio quality demonstrated here was additionally degraded since we want to avoid improper use of this technology. The purpose of this video is to excite the class about the potential of deep learning, not to deceive anyone. Thus, we purposely lowered the audio quality before publishing to make the synthetic aspect of this video clearer.”
via Alexander Amini
“Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. But he adds that his team has developed an even more effective technique, but says he’s keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.””
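Lyu's published blink detector is a recurrent neural network, but the signal it exploits — eye openness over time, which early deepfakes failed to reproduce — can be sketched with the classic eye-aspect-ratio (EAR) heuristic of Soukupová and Čech. The landmark layout, threshold, and EAR trace below are illustrative, not Lyu's actual values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of eye landmarks in the standard EAR ordering
    (corners at 0 and 3, upper lid at 1-2, lower lid at 4-5).
    A low EAR means the eye is closed."""
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, thresh=0.2):
    """Count closed -> open transitions in a per-frame EAR series."""
    closed = ear_series < thresh
    return int(np.sum(closed[:-1] & ~closed[1:]))

# A stylized open eye: EAR = (2 + 2) / (2 * 4) = 0.5
eye_open = np.array([[0, 0], [1, -1], [3, -1], [4, 0], [3, 1], [1, 1]], float)
print(round(eye_aspect_ratio(eye_open), 2))  # 0.5

# Per-frame EAR trace: mostly open (~0.3) with two brief closures (~0.1)
ears = np.array([0.3]*10 + [0.1]*3 + [0.3]*10 + [0.1]*2 + [0.3]*5)
print(count_blinks(ears))  # 2
```

A detector of this kind flags videos whose blink rate falls far below the human norm — and, as Lyu notes, a forger can defeat it simply by training on footage that includes blinking, which is why such cues are treated as a moving target.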
“DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform. If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”
Source: Media Forensics