Decode

Forget AI-Generated Images, AI Editing Tools Are Fuelling Disinformation

Even as fact-checkers try to wrap their heads around AI-generated images, the AI editing tools that preceded them are being misused to spread disinformation.

By Karen Rebelo | 18 July 2024 10:06 AM GMT

Cheap, artificial intelligence (AI)-enabled editing tools and apps, while not new, are being weaponised to create cheapfakes, adding another layer of complexity to the fight against disinformation.

At least two high-profile examples from the past few days show how real photos can be altered with AI editing tools and used effectively to peddle disinformation in different parts of the world.

Following the assassination attempt on former United States President Donald Trump, an image from the incident, purporting to show three Secret Service agents smiling while escorting Trump to safety, was shared on X.

The text accompanying the photo insinuated that the shooting was staged. The photo itself was most likely altered using FaceApp, an AI photo-editing app.

Fake smiles were synthetically added to the faces of the three Secret Service agents in the image.

The original photo, which flew around the Internet, was taken by Associated Press photographer Evan Vucci and shows the agents looking sombre. The original image can be seen here.

Henry Ajder, an expert on synthetic media, told Decode that AI editing tools follow in the footsteps of ‘cheapfakes’, fake images or videos created using crude technology.

However, Ajder, who is also the founder of Latent Space Advisory, said that off-the-shelf editing tools can impact how people remember authentic pieces of media.

“I think the difference is when you’re able to kind of take real pieces of authentic media and change them subtly what they could refer to as a selective kind of small edit whether that’s a face swap or inserting objects or removing objects, changing facial expressions, people can recognise that they’ve seen perhaps the original image. So there’s that kind of instinctive, reflexive recognition,” Ajder said.

“But they may not pick up on the fact that ‘oh, you know I saw that now famous image of Trump but I didn’t see that the people were smiling’. So I think that holds a little bit more concern that it can kind of retrospectively kind of change how people feel they remember authentic pieces of media,” he added.

FaceApp Misused To Peddle Disinfo In India Before

This is not the first time FaceApp has been misused to peddle disinformation.

The same app was used in May last year to alter a photo of Indian wrestlers who were detained in a police van while protesting against the former head of the Wrestling Federation of India, a Bharatiya Janata Party Member of Parliament accused of sexual harassment.

The app was used to synthetically add smiles to the faces of protesting wrestlers.

Remaker: AI Editing Tool Used To Create Fake Archival Image

In another recent example from India, the AI face-swap editing tool Remaker.ai was used to create a black-and-white ‘archival’ photo purporting to show Sonia Gandhi, a prominent opposition politician, holding a lit cigarette.

The image was created by taking a stunning 2012 photograph titled Ghazale by photographer Farzad Sarfarazi and morphing the model’s face with that of Gandhi.

While the photo would not have raised eyebrows in many parts of the world, it went viral on Facebook, with one post shared over 6,200 times. In the past, right-wing Indian social media users have tried to pass off photos of former Bond girl Ursula Andress in a bikini as photos of a young Sonia Gandhi.

In both the US and India examples cited above, the AI editing tools were able to make convincing edits to faces that were not turned directly toward the camera.

No Photoshop Skills, No Problem

Apps such as FaceApp and tools such as Remaker, which were created for a lark, no longer require a person to have any knowledge of photo-editing software such as Adobe’s Photoshop.

‘NO MORE HOURS SPENT ON PHOTOSHOP’, FaceApp’s website states.

The Cyprus-headquartered company was founded in 2017 by Yaroslav Goncharov, a former Microsoft software developer. The app went viral with the ‘FaceApp challenge’, when celebrities posted synthetically aged photos of themselves on social media, but privacy concerns soon dogged the company.

Decode has written to FaceApp for comment. This article will be updated if we receive a response.

The Detection Challenge 

Those in the industry describe deepfakes and detection systems as having a virus-antivirus relationship, in which detection systems need to constantly evolve.

Yet detection remains a patchwork of responses spread across the world, spearheaded mainly by academia and small start-ups, and these efforts do not attract the funding that goes into the supply side.

Decode tested both images using a deepfake detection tool developed by TrueMedia.org, a non-profit organisation founded by AI researcher Oren Etzioni, Professor Emeritus at the University of Washington.

TrueMedia.org’s tool, currently available to journalists and fact-checkers approved by the non-profit, detected manipulations in both images.

“Detecting AI-edited real images versus fully AI-generated ones presents unique challenges. AI-edited real images often involve subtle changes, making them harder to detect compared to fully AI-generated images created using text-to-image models,” the non-profit told Decode, over email.

“For images specifically, we have an ensemble of different models: some distinguish from AI-generated media vs. non-ai-generated media, and others focus on small manipulation within an image to identify if there have been any regions manipulated within the image (be it using AI or manually Photoshopped),” it added.
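The general approach described in that quote, a whole-image classifier combined with region-level manipulation checks, can be illustrated with a minimal sketch. The detector functions, thresholds and score combination below are entirely hypothetical placeholders for the purpose of illustration, not TrueMedia.org's actual models, which are not public.

```python
# Illustrative sketch of an ensemble image-forensics check:
# one detector scores whether the whole image looks AI-generated,
# another scores localised manipulations in patches.
# Both detectors here are hypothetical stubs.

from dataclasses import dataclass
from typing import Callable, List
import numpy as np


@dataclass
class EnsembleVerdict:
    generated_score: float  # probability the whole image is AI-generated
    max_patch_score: float  # highest manipulation score among patches
    flagged: bool           # True if either signal crosses its threshold


def split_into_patches(image: np.ndarray, patch: int = 64) -> List[np.ndarray]:
    """Tile the image into non-overlapping patches for localised checks."""
    h, w = image.shape[:2]
    return [image[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]


def run_ensemble(
    image: np.ndarray,
    whole_image_model: Callable[[np.ndarray], float],  # hypothetical detector
    patch_model: Callable[[np.ndarray], float],        # hypothetical detector
    gen_threshold: float = 0.5,
    patch_threshold: float = 0.7,
) -> EnsembleVerdict:
    """Combine a whole-image 'AI-generated?' score with per-patch manipulation scores."""
    gen_score = whole_image_model(image)
    patch_scores = [patch_model(p) for p in split_into_patches(image)]
    max_patch = max(patch_scores, default=0.0)
    flagged = gen_score >= gen_threshold or max_patch >= patch_threshold
    return EnsembleVerdict(gen_score, max_patch, flagged)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    test_image = rng.random((256, 256, 3))
    # Stand-in detectors: real systems would be trained neural networks.
    verdict = run_ensemble(
        test_image,
        whole_image_model=lambda img: 0.12,
        patch_model=lambda p: float(p.mean()),
    )
    print(verdict)
```

The point of splitting the decision this way is the one the non-profit makes: a subtle local edit to a real photo can leave the whole-image score low, so a second, region-level signal is needed to catch it.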

AI Editing Tools Vs Text-To-Image Generators

AI-enabled photo-editing apps were a precursor to the explosion of freely available text-to-image generators that have flooded the Internet.

Text-to-image models allow users to enter a text prompt and generate a fully AI-generated image. They’ve opened a Pandora’s box of ethical problems such as questions over copyright of the training data, bias in their depictions of women and people of colour, and their weaponisation to create non-consensual imagery targeting women and children.
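As a rough illustration of how low that barrier is, the open-source diffusers library can drive a publicly released text-to-image model from a single sentence of text. The checkpoint and prompt below are illustrative examples only; the article does not tie any specific generator to the cases discussed.

```python
# Minimal sketch of prompt-driven image generation using the open-source
# Hugging Face diffusers library. Checkpoint and prompt are examples only.
import torch
from diffusers import StableDiffusionPipeline

# Download a publicly available text-to-image checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # a single consumer GPU is enough

# One sentence of text produces a fully synthetic image.
image = pipe("a grainy black and white street photograph from the 1970s").images[0]
image.save("generated.png")
```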

Globally, most fact-checkers agree that cheapfakes still form the bulk of the media used to peddle misinformation. In India, for example, genuine videos shared out of context are routinely used to spread misinformation. However, AI editing tools take sophisticated technology and turn it into off-the-shelf apps, enabling the creation of more cheapfakes.

“Sophisticated tooling that wasn’t available even to the top researchers you know three or four years ago is now available pretty widely and I think the open-source space is a big part of that,” Henry Ajder said.

“It will be interesting to see as more and more ways to access open-source code and to edit open-source code become available with little or no coding experience, how that then potentially shapes and powers the future of the editing tools and particularly the future of editing tools that could be weaponised easily or are intentionally designed for misuse”, he said.