If you have ever wondered about deepfakes, their purpose, and how to spot them, this article is for you. Join us today as we explore AI-generated fake video technology and learn about the negative and positive roles it plays in the evolution of our digital society.
The Ethics Of Artificial Intelligence
Back in the 1770s, Europe was dazzled by an invention that promised to change the world as we know it: a human-shaped automaton called the Mechanical Turk that would challenge the guests of the Habsburg Court to a game of chess. A man-made machine acting of its own free will. Or so it seemed, because the Mechanical Turk was nothing but a hoax: a wooden puppet operated by a stagehand hidden under the chess table.
However, this elaborate roadshow attraction was one of the first to plant a particular idea in the public imagination: the existence of artificial intelligence.
Fast-forward to 2021. We have not been outnumbered by vengeful, self-aware machines (as we constantly feared). However, the dangers of artificial intelligence are real, and they exist in a much more subtle form.
Computers have been getting better and better at simulating reality for decades. Hollywood relies heavily on computer-generated imagery (CGI) to replace traditional practical sets and props and has begun featuring more CG characters in recent years.
Of course, out-of-this-world characters like Jar Jar Binks and Thanos are easy to spot. But the real challenge is making computer-generated characters blend seamlessly with their real-life counterparts.
Through this technology, famed British actor Peter Cushing reprised his role as the villainous Grand Moff Tarkin in 2016's Rogue One: A Star Wars Story, even though he died in 1994.
As you can imagine, using the image of an actor who’s been dead for 22 years is just the tip of the questionable ethics iceberg. Enter the strange world of deepfakes.
What Are Deepfakes?
Before looking at deepfakes, let’s first break down the terminology.
While the “fake” part is prominent enough, the “deep” is short for “deep learning.” This is a family of machine learning methods built on neural networks: algorithms that learn to replicate patterns by analyzing large data sets.
Thanks to machine learning algorithms, a convincing photo, video, or audio file can be created or manipulated until it is almost indistinguishable from the real deal to the naked eye. This means individuals can be depicted doing things they never did, with a great chance of deceiving the audience.
The reasons people create deepfakes are many. And even though they are usually associated with pornography (most infamously, fake celebrity sex tapes), there are several other motives.
In March 2021, a cheerleader’s mom was accused of creating pornographic deepfakes to harass another girl on the team. We recently learned how a voice deepfake was used to scam a CEO out of $243,000.
A Sensity statistic shows that, as of December 2020, almost 50,000 deepfake videos had been uploaded online, an increase of over 330% since July 2019. The algorithms are efficient enough to turn personal photos available on social media into believable videos, so we may be unknowingly supplying deepfake creators with raw material.
How Deepfakes Can Impact Your Reputation
You might think you’ll never be the target of a fake video. But in our experience, as technology advances, it moves from affecting celebrities, politicians, and high-profile figures to impacting the lives and reputation of the average citizen.
As mentioned in this article, facial recognition apps are on the rise. Who knows what companies do with the content they gather from innocent users who think they are just playing a simple game, creating an avatar, or checking who their “twin celebrity” is? In reality, this is raw material for deepfakes!
So, staying privacy-savvy online can keep you and your loved ones safe from the damage that malicious deepfakes can do to your personal and brand reputation.
The History Of Deepfakes
Image manipulation is certainly nothing new; it has been around roughly since the invention of photography, and so has the practice of face swapping. One of Lincoln's most iconic portraits, from the 1860s, is a composite: the head of Honest Abe superimposed on the figure and background of John C. Calhoun, posing in a heroic stance.
The Soviet Union became infamous in the early to mid-20th century for its blatant use of photo manipulation, altering history in the process. Over time, people grew accustomed to forged images but continued to trust videos, which were much more difficult to manipulate.
The real game-changer came in 2017, when a Reddit user calling himself “deepfakes” first shared a batch of manipulated pornographic videos with celebrity faces added to adult performers' bodies. The user soon revealed that his code was based on multiple open-source libraries.
Images scraped from Google, YouTube, and stock photo archives were compiled to provide enough facial material for the videos. The effects were surprisingly lifelike, with the swapped faces blending in with each video's light, shade, and ambiance.
Once the user shared the code used to create the deepfakes, the mass media quickly picked up on the subject, inspiring copycats to produce and circulate manipulated celebrity content.
Reddit banned the user, but the deed was done, and the technology was now freely available online. FaceSwap, FakeApp, and other specialized software emerged and developed more user-friendly versions, allowing average users to tinker with their own deep fakes. The rest, as they say, is internet history.
How Is a Deepfake Made?
Now that you have a better idea of what deepfakes are, let's look at how they are made.
Autoencoders and generative adversarial networks (GANs) are the basis for the main computer learning techniques that make deepfakes possible.
GANs pit two neural networks against each other: a generator and a discriminator. This constant competition is what allows the models to learn so quickly.
Imagine them as an artist and a critic. The generator attempts to create a realistic image, while the discriminator's job is to determine whether it is looking at a deepfake.
If the generator manages to dupe the discriminator, the discriminator uses that information to become a better judge. Likewise, if the discriminator flags the generator's image as fake, the generator uses the feedback to improve its image-making abilities. The cycle is never-ending.
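This adversarial loop can be sketched in code. The example below is a hypothetical toy, not a real deepfake pipeline: it uses plain NumPy, a one-line "generator" and "discriminator," and 1-D numbers instead of images. Real systems use deep convolutional networks and frameworks such as PyTorch or TensorFlow, but the back-and-forth update rule is the same idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from a normal distribution centered at 4.0.
# Generator: g(z) = a*z + b, which starts far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.1, 0.0

lr, batch = 0.03, 32
for step in range(3000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c.
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push d(fake) toward 1, i.e. fool the critic ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_fake) * w   # gradient of -log d(fake) w.r.t. each sample
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"generated mean after training: {fake_mean:.2f} (target ~4.0)")
```

After enough rounds, the generator's output distribution drifts toward the real one, at which point the discriminator can no longer tell them apart — the never-ending cycle described above.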
Deepfake algorithms analyze footage of a person for the nuances of their likeness: the sum of the movements and expressions they make. They then blend the two faces together, recreating the original person's micro-expressions along with other facial and body movements, lighting conditions, and shading, so the inserted face matches the surrounding environment.
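The other building block mentioned above, the autoencoder, learns to squeeze an input down to a compact code and then reconstruct it; classic face-swap pipelines train a shared encoder with a separate decoder per face. The following is a minimal, hypothetical sketch of just the compress-and-reconstruct idea, with synthetic 16-dimensional vectors standing in for face images (all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "faces": 200 samples of 16-dimensional data that secretly lie
# near a 4-dimensional subspace, so a 4-unit bottleneck can capture them.
latent_true = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16))
X = latent_true @ mixing + 0.05 * rng.normal(size=(200, 16))

# Linear autoencoder: encode 16 dims down to 4, then decode back to 16.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def reconstruction_error(X, W_enc, W_dec):
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

initial_error = reconstruction_error(X, W_enc, W_dec)
lr = 0.01
for _ in range(2000):
    Z = X @ W_enc            # encode: compress into the bottleneck
    X_hat = Z @ W_dec        # decode: attempt to reconstruct the input
    err = X_hat - X
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_error = reconstruction_error(X, W_enc, W_dec)
print(f"reconstruction error: {initial_error:.3f} -> {final_error:.3f}")
```

In a face-swap setup, the trick is that one shared encoder feeds two different decoders: encode a clip of person A, decode with person B's decoder, and B's face comes out wearing A's expressions.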
The Dangers Of Deepfakes And Machine Learning
Wondering what the main dangers of deepfakes are? Well, there's a lot to talk about, so sit tight!
While most of us probably use deep fake technology simply to add our face to a funny GIF and get a few laughs, the new dangers generated by this emerging technology are just beginning to be understood.
Besides pornographic material being easily produced (leading to a new type of revenge porn), deepfakes have also shown potential as a political tool.
Videos showing public figures, such as candidates, can be manipulated into making false statements and influencing voters during a campaign. Since it has been proven that fake news gets distributed much faster than verified sources, this is now considered a national security issue.
For example, around the 2020 election, several videos emerged showing controversial world leaders such as Vladimir Putin and Kim Jong Un commenting on ongoing US events (the storming of the Capitol among them).
While the speeches were tongue-in-cheek, the creators were accused of hiding under the cloak of satire, fully knowing that many less tech-savvy viewers would take them at face value.
As we mentioned earlier in this article, another hotly debated issue is what happens to all the facial data that users freely and willingly submit when using face manipulation apps such as the Russian-made FaceApp. Where is it stored, and could it be used for deepfakes? And how could that affect a user's reputation if put to malicious use?
What Are The Current Deepfake Laws?
The judicial system is also under pressure. While video evidence has always been open to interpretation, there are now several ways a deepfake video can taint a court case, from clients fabricating evidence to win to fake videos finding their way into archives historically considered trustworthy.
On December 20, 2019, President Donald Trump signed what became the nation’s first federal law regarding the new technology. Deepfake legislation is now part of the NDAA (National Defense Authorization Act) for the Fiscal Year 2020, a $738 billion defense policy bill passed by the House and Senate.
The law requires a comprehensive report on the potential weaponization of this emerging technology by foreign powers, with particular attention to China's and Russia's technical capabilities.
The NDAA now requires the government to notify Congress of any foreign deep fake disinformation activity targeting US elections. The government is also encouraged to fund research for more deep fake detection technologies.
Also, in 2019, Virginia became the first state to impose criminal penalties for the distribution of unconsented deepfake pornography, making it a Class 1 misdemeanor, punishable by up to 1 year in jail and a $2,500 fine.
Texas followed suit in September of that same year, prohibiting the creation and distribution of deepfake videos intended to harm the reputation of candidates for public office or influence elections. Texas law defines a deepfake video as “a video created with the intent to deceive, appearing to depict a real person performing an action that did not occur in reality.”
Two more laws enacted by the State of California allow victims of unconsented deepfake porn to sue for damages, as can candidates for public office who have been targeted with election-related deepfakes lacking warning labels.
A similar law is under review by the UK government, specifically addressing the making and sharing of unconsented intimate images. The same review also tackles other digital offenses, such as what qualifies as revenge porn and cyber flashing (sending unsolicited sexual images).
Other private endeavors have been set up across the world, the DFDC (DeepFake Detection Challenge) among them. Its goal is to stay one step ahead by creating detection technologies and protecting users from malicious actors. The collective effort has generated more than 35,000 deepfake detection models.
What Are The Upsides Of Deepfakes?
Like pretty much everything else online, it’s not all that bad. Many industries, such as entertainment, gaming, social media, education, digital communications, healthcare, science, and business, are taking advantage of the technology’s positive aspects.
The film industry can benefit from deepfake technology in several ways. Ethical implications of casting departed actors without their direct consent aside, it can also upgrade amateur footage to professional quality or restore old, deteriorated footage.
Extensive research is being carried out in the medical field to help recreate digital voices for people who have lost their ability to speak due to illness.
GAN technology has also been used successfully to detect anomalies in X-ray images by reconstructing input images and discriminating between normal and abnormal ones.
Smart assistants such as Google Assistant, Siri, or Alexa may still be in their early stages, but they already provide help and company. With software using deep learning audio technology, they’ll soon provide an even more realistic experience, gaining more human traits.
Another controversial service helps people deal with loss by digitally bringing a deceased loved one “back to life.” However, there's a gray area between “comforting” and “traumatic”: psychologists who specialize in grief and recovery are pretty much split down the middle on the matter.
A Look Into The Future Of Deepfakes And Artificial Intelligence
Though social media is now ubiquitous, Facebook, Twitter, Instagram, and LinkedIn are still considered pioneering platforms: the first “real generation” of social networks after MySpace and mIRC.
One might rightly speculate about the new social platforms that will replace them in the upcoming decades. But one thing is for sure: deep fakes are certainly going to be an integral part of these.
Perhaps virtual reality will also play an important role, creating an alternate digital medium for people to interact in, resembling reality even more closely. The success of virtual worlds such as Second Life is an argument for this projection.
Custom-created avatars could be set to better mirror users' expressions, mannerisms, and overall personality (which would make MMOs even more interesting). So, while Space Race-era predictions about the 21st century may have proven a tad too optimistic, we have every reason to believe the digital realm will be what truly defines our lifetime.
What Are Deepfakes? Wrapping Up!
Deepfakes are here to stay, following a century-old trend of image manipulation.
Advancements in AI technology will continue to blur the line between what is real and what has been tampered with, while social media continues to spread content worldwide in seconds. Understanding what deepfakes are and how they work helps us protect ourselves and our loved ones from any harm they might inflict as political or entertainment tools.
If deep fakes have you worried and you would like to learn more about your internet privacy solutions and personal reputation management, don’t hesitate to get in touch. Our team has the tools and technology to restore, protect, and defend your online reputation.