Sex, lies and deepfakes: Why are legislation and digital education on fake videos necessary?

The legal and moral battle against AI-powered fake news.

Last Thursday, the United States House of Representatives held its first hearing dedicated to deepfakes, the form of fake news powered by Artificial Intelligence (AI).

The hearing could not have come at a better time. It featured examples of high-profile deepfakes: fake videos of Mark Zuckerberg on Instagram and of Kim Kardashian, both created by Bill Posters and Daniel Howe, artists behind a visual project that raises awareness about data privacy.

The representatives also reviewed an AP report about how a spy used a non-existent face on LinkedIn to infiltrate Washington's political scene.

The flood of false information spread across social media during the 2016 presidential election was only the beginning; it set the stage for new digital phenomena such as deepfakes.

This new debate has reached television series such as Years and Years, where one episode portrayed the use of artificial intelligence in electoral videos as a threat to democracy.

Deepfakes can make politicians or celebrities appear to say anything you can think of.

AI is moving at an accelerated pace, and the amount of data needed to fabricate a convincing video has dropped drastically. The most worrying aspect is that there is currently no way to stop the practice, given the lack of government legislation and the near non-existent digital education of users worldwide.

However, manipulated videos don't have to be entirely fabricated. The recent controversy over a video of Nancy Pelosi, in which the footage was simply slowed down to make her appear drunk, is an example of a "cheap fake" that got out of control: the clip went viral on Facebook and racked up millions of views within 48 hours.

Editing software is now widely available that lets users cut video and sound to produce an almost-real likeness of a person making a degrading comment in public. Pelosi's case is one example.
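To see how low the technical bar really is, the short sketch below (a hypothetical illustration, not code from any of the cases above) slows a clip to 75% of its original speed with Python and OpenCV, the same basic trick behind the Pelosi "cheap fake"; the file name speech.mp4 is invented for the example, and audio would have to be slowed separately, since OpenCV only handles the image frames.

    # A minimal sketch of a "cheap fake" slowdown: re-encode the same frames
    # at a lower frame rate so playback is stretched and speech sounds sluggish.
    # Assumes OpenCV (pip install opencv-python) and a hypothetical speech.mp4.
    import cv2

    cap = cv2.VideoCapture("speech.mp4")              # original footage
    fps = cap.get(cv2.CAP_PROP_FPS)                   # e.g. 30 frames per second
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Write the identical frames back out at 75% of the original rate.
    out = cv2.VideoWriter("speech_slowed.mp4",
                          cv2.VideoWriter_fourcc(*"mp4v"),
                          fps * 0.75,
                          (width, height))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)                              # no AI involved at any step

    cap.release()
    out.release()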

In countries like India, these practices have reached new levels of intimidation, especially against women journalists.

Rana Ayyub, an Indian journalist and one of the latest victims of these efforts, has been intimidated once again through a pornographic deepfake video featuring a montage of her face, which has gone viral.

Lies have always existed. The problem now is how to address the current AI landscape with educational tools that help people recognize the threat and take protective measures.

For example, Mark Zuckerberg said that the Pelosi video would not be removed from Facebook, nor would the fake video of himself be deleted from Instagram. Although the Pelosi video still circulates on Facebook and is now flagged as false, why can't anyone delete it? Would removing it amount to censorship? Or is keeping it up just another way of generating traffic at the expense of someone else's reputation?

Faced with the thousands of questions raised by these technological advances, the House of Representatives has met to discuss the issue over and over again, while the clock keeps ticking and the 2020 elections draw closer.

That is why Rep. Yvette Clarke has introduced a bill against deepfakes, offering recommendations to build safeguards against bias and to open a path for holding both AI companies and researchers accountable.

Clarke's proposal also requires platforms to invest in countermeasures, integrating features that detect manipulated content.

Considering that legislation is years behind the technology, several witnesses at the hearing said that everyone who uses the means of distribution, that is, social media, should be more cautious to avoid spreading false news and videos.

"People who share this stuff are part of the problem, even if they don't know it," said David Doermann, an expert witness at the hearing and director of the Artificial Intelligence Institute at the University at Buffalo.

Doermann's point is accurate and extends the criticism to all users, platforms, and governments, raising the question: Is it time to formally implement digital education? Or is it already too late?
