Some researchers worry that deepfake technology could pose a new threat to scientific integrity, while others think traditional image manipulation is here to stay
In mid-March, a one-minute video of Ukraine’s president Volodymyr Zelenskyy appeared first on social media and later on a Ukrainian news website. In it, Zelenskyy told his soldiers to lay down their arms and surrender to Russian troops. But the video turned out to be a deepfake, a piece of synthetic media created by machine learning.
Some scientists are now concerned that similar technology could be used to commit research fraud by creating fake images of spectra or biological specimens.
‘I’ve been worried very much about these types of technologies,’ says microbiologist and science integrity expert Elisabeth Bik. ‘I think this is already happening – creating deepfake images and publishing [them].’ She suspects that the images in the more than 600 completely fabricated studies she helped uncover, which likely came from the same paper mill, may have been AI-generated.