The technique works by mapping out each face, examining the eyes and the light reflected in each eyeball in incredible detail
University at Buffalo computer scientists have developed a tool that automatically identifies deepfake photos with up to 94 per cent effectiveness on portrait-like photos by analysing light reflections in the eyes.
“The cornea is almost like a perfect semisphere and is very reflective,” says Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. “So, anything that is coming to the eye with a light emitted from those sources, then it will have an image on the cornea.
“The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we don’t notice when we look at a face,” says Lyu.
Examining tiny differences
When we look at something, the image of what we see is reflected in our eyes. In a real photo or video, the reflections on the eyes would generally appear to be the same shape and colour.
However, most images generated by artificial intelligence – including generative adversarial network (GAN) images – fail to render these reflections accurately or consistently, possibly because many photos are combined to generate the fake image.
To conduct the experiments, the research team obtained real images from Flickr Faces-HQ, as well as AI-generated facial images that look lifelike but are indeed fake. All images were portrait-like (real people and fake people looking directly into the camera with good lighting) and 1,024 by 1,024 pixels.
The tool works by mapping out each face. It then examines the eyes, followed by the eyeballs and lastly the light reflected in each eyeball. It compares, in incredible detail, potential differences in the shape, intensity and other features of the reflected light.
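The comparison step can be illustrated with a minimal sketch. The snippet below assumes the face-mapping stage has already produced a cropped grayscale image of each eye, then binarizes the bright (specular) pixels and scores how well the two reflection patterns overlap using an intersection-over-union measure. The function names, the brightness threshold, and the IoU-style score are illustrative assumptions, not the researchers' published implementation.

```python
import numpy as np

def reflection_mask(eye_crop, threshold=200):
    # Pixels brighter than the threshold are treated as specular
    # reflections of a light source on the cornea (assumed heuristic).
    return eye_crop >= threshold

def reflection_similarity(left_eye, right_eye, threshold=200):
    # Intersection-over-union of the two reflection masks:
    # 1.0 means identical patterns; values near 0 suggest the
    # eyes disagree, a possible sign of a GAN-generated face.
    left = reflection_mask(left_eye, threshold)
    right = reflection_mask(right_eye, threshold)
    union = np.logical_or(left, right).sum()
    if union == 0:
        return None  # no reflected light source visible in either eye
    intersection = np.logical_and(left, right).sum()
    return intersection / union

# Toy example: two 8x8 eye crops with the highlight in the same place.
real_left = np.zeros((8, 8), dtype=np.uint8)
real_left[2:4, 2:4] = 255
real_right = real_left.copy()

# A mismatched right eye whose highlight sits somewhere else entirely.
fake_right = np.zeros((8, 8), dtype=np.uint8)
fake_right[5:7, 5:7] = 255

print(reflection_similarity(real_left, real_right))  # 1.0
print(reflection_similarity(real_left, fake_right))  # 0.0
```

A real detector would work on aligned, colour eye regions from a face-landmark model rather than toy arrays, but the underlying idea is the same: consistent corneal reflections score high, mismatched ones score low.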
While promising, the technique has limitations.
For one, you need a reflected source of light. Also, mismatched light reflections of the eyes can be fixed during image editing. Additionally, the technique looks only at the individual pixels reflected in the eyes – not the shape of the eye, the shapes within the eyes or the nature of what’s reflected in the eyes.
Finally, the technique compares the reflections within both eyes. If the subject is missing an eye or the eye is not visible, the technique fails.
According to Lyu, who has researched machine learning and computer vision for more than 20 years, deepfake videos also tend to give their subjects inconsistent or nonexistent blink rates.
Nevertheless, identifying deepfakes is increasingly important in a hyper-partisan world full of race- and gender-related tensions, where disinformation carries real dangers, including violence.