Rashmika Mandanna deepfake row: What happened and how to identify such videos

On November 5, a video purportedly showing actor Rashmika Mandanna entering an elevator emerged online. The clip went viral on social media, but it was soon revealed that the woman in it was not Mandanna at all. It was in fact a masterfully created deepfake in which the actor’s face had been superimposed on that of a British-Indian influencer. The video has since sparked a huge row over just how dangerous this AI-powered technology is, how it can be spotted to curb misinformation, and how people can protect themselves from being impersonated.

Before we get into the incident, it helps to know what a deepfake is. A deepfake is a piece of media, such as a photo, video, or audio clip, that has been manipulated with AI so convincingly that it appears genuine. Mandanna is the victim of the latest such attack.


The Rashmika Mandanna deepfake row

A six-second clip of the actor, whose original uploader remains unknown, was shared online and quickly went viral. In the video, Mandanna can be seen entering a lift. However, AltNews journalist Abhishek soon posted on X, highlighting that it was a deepfake. In a series of posts, he said, “There is an urgent need for a legal and regulatory framework to deal with deepfake in India. You might have seen this viral video of actress Rashmika Mandanna on Instagram. But wait, this is a deepfake video of Zara Patel”.

“The original video is of Zara Patel, a British-Indian girl with 415K followers on Instagram. She uploaded this video on Instagram on 9 October…From a deepfake POV, the viral video is perfect enough for ordinary social media users to fall for it,” he added.

Amitabh Bachchan, Rajeev Chandrasekhar react to the video

As soon as the deepfake video was exposed, several celebrities and prominent leaders reacted to the situation. One of the first among them was actor Amitabh Bachchan, Mandanna’s co-star in the film Goodbye. He posted on X, “yes this is a strong case for legal”.

Union Minister Rajeev Chandrasekhar also posted on X, highlighting that the “Govt is committed to ensuring Safety and Trust of all DigitalNagriks using Internet”. Calling deepfakes the latest, extremely dangerous and damaging form of misinformation, he explained that it “needs to be dealt with by platforms”.

Mandanna herself took to X and said, “I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused”.

Patel, the woman whose video was deepfaked by bad actors, issued a statement on her Instagram account and said, “It has come to my attention that someone created a deepfake video using my body and a popular Bollywood actress’s face. I had no involvement with the deepfake video, and I’m deeply disturbed and upset by what is happening. I worry about the future of women and girls who now have to fear even more about putting themselves on social media. Please take a step back and fact-check what you see on the internet. Not everything on the internet is real”.

How to spot deepfakes and protect yourself from them

The Massachusetts Institute of Technology (MIT), which has its own dedicated AI and machine learning research departments, has published some helpful tips that people can use to differentiate between deepfakes and real videos. A few of them are listed below.

1. Pay attention to the face. High-end deepfake manipulations are almost always facial transformations.

2. Pay attention to blinking. Does the person blink too little or too much? (A simple automated check for this cue is sketched below.)

3. Pay attention to the lip movements. Some deepfakes are based on lip-syncing. Do the lip movements look natural?

In the Mandanna/Patel deepfake video, all three of these issues are present, and even in a clip as short as six seconds, you can spot them with some careful observation.
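Of the three cues, blink rate is the easiest to check programmatically. Below is a minimal, illustrative sketch, not a method endorsed by MIT or mentioned anywhere in this story, that counts blinks in a clip using the well-known eye aspect ratio (EAR) heuristic. It assumes Python with OpenCV, dlib, and SciPy installed; the video and model file names are placeholders.

```python
# Minimal blink-rate sketch using the eye aspect ratio (EAR) heuristic.
# dlib, OpenCV, and SciPy are illustrative choices, not prescribed by the article.
import cv2
import dlib
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes.
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
# Hypothetical local path to dlib's standard 68-point facial landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

EAR_THRESHOLD = 0.21      # below this, the eye is treated as closed
MIN_CLOSED_FRAMES = 2     # eye must stay closed this long to count as a blink

cap = cv2.VideoCapture("suspect_clip.mp4")   # hypothetical file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back to ~30 fps if metadata is missing
blinks, closed_frames, frames = 0, 0, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # In the 68-point model, points 36-41 are one eye and 42-47 the other.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_frames = 0
cap.release()

if frames:
    minutes = frames / fps / 60.0
    print(f"{blinks} blinks in {frames / fps:.1f}s "
          f"(~{blinks / minutes:.0f}/min; humans average roughly 15-20/min)")
```

A blink rate far outside the typical human range of roughly 15 to 20 blinks per minute is a red flag, though it is only one signal: newer deepfake models handle blinking better, so a check like this should be paired with the facial and lip-sync cues above.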

It has also become important to protect yourself from deepfakes, as some scammers have begun using them to trick victims into believing that the person they are talking to on a video or audio call is someone they know.

To protect yourself:

1. Ask the person to wave their hand in front of their face. Deepfake videos made with current technology cannot sustain themselves through such an obvious disruption, and the superimposed face will typically glitch or distort.

2. Never send money to someone on a whim after receiving videos purportedly from friends or kin. Always verify first by calling them on another number or by contacting another family member.

3. Ask them something personal to confirm they are who they’re claiming to be.

For most people, there is little risk of being deepfaked themselves, since creating such superimpositions requires a large amount of training data. Unless a great many photos and videos of you are available online, it will be hard for an AI model to create a convincing deepfake of you, and side-profile views of the face are especially difficult to fake.
