Sony has filed a patent that could transform the way digital content is consumed, titled “automatic and personalized editing of video content using artificial intelligence.” This technology allows for real-time modification of offensive content in video games and movies, using methods such as deepfakes, blurring, or removal of sections. Users will be able to create custom filters to censor whatever they consider inappropriate.
Censorship or adaptation?
Among the possible uses of this innovation is parental control. For example, if a video game contains offensive language, parents could activate this system to mute or alter it, thus ensuring a more appropriate experience for minors. However, this type of technology would not be exclusive to parents, but would also allow any user to adapt the content to their own preferences, from avoiding situations that they find uncomfortable to omitting elements they consider offensive.
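Sony has not published the internal mechanics of the patent, but the user-facing idea, per-user filter rules mapped to actions like muting, blurring, or skipping a scene, can be sketched in a few lines. Everything below (the `FilterRule` shape, the scene tags, the action names) is a hypothetical illustration, not Sony's actual design:

```python
from dataclasses import dataclass

# Hypothetical rule format: the patent's real data model is not public.
@dataclass
class FilterRule:
    category: str   # e.g. "profanity", "gore"
    action: str     # "mute", "blur", or "skip"

def apply_filters(scenes, rules):
    """Return the edited scene list: scenes whose tags match a rule
    are muted, blurred, or dropped according to the user's settings."""
    actions = {r.category: r.action for r in rules}
    edited = []
    for scene in scenes:
        action = next((actions[t] for t in scene["tags"] if t in actions), None)
        if action == "skip":
            continue  # omit the scene entirely
        edited.append({**scene, "applied": action})
    return edited

scenes = [
    {"id": 1, "tags": []},
    {"id": 2, "tags": ["profanity"]},
    {"id": 3, "tags": ["gore"]},
]
rules = [FilterRule("profanity", "mute"), FilterRule("gore", "skip")]
edited = apply_filters(scenes, rules)
print([(s["id"], s["applied"]) for s in edited])  # [(1, None), (2, 'mute')]
```

In a real system the scene tags would come from an AI classifier running over the video stream; here they are hard-coded for clarity.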
The introduction of an automated censorship system has raised concerns about its impact on the artistic experience of video games. Many experts argue that, in titles like The Last of Us Part 2, censorship could alter the original message of the game, as certain moments of violence are essential for understanding its narrative. The possibility of suppressing content could lead to significantly different and less impactful gaming experiences.
Despite the filing, it is important to note that a patent does not guarantee immediate implementation in future Sony products. Historically, the company has registered innovations that never made it to market, which suggests this patent may simply be a way to protect exploitation rights, with no clear plans for upcoming development.
Deepfake videos are becoming insanely good. Essentially, AI allows you to face-swap anyone into any video. Sometimes, this can be used for evil. Sometimes, it can be used for fun.
But today, we’re simply enjoying “The Shining,” as if Jim Carrey had starred instead of Jack Nicholson. It’s nearly impossible to see that this isn’t real:
And there’s a Part 3 coming! It’s probably worth subscribing to Ctrl Shift Face on YouTube to keep track of these.
It’s pretty fun to watch these videos! And then you realize that this technology is going to be used to destroy our democracy and then you curl up into the fetal position and try to remember how to breathe.
Deepfake videos can heavily influence voters, but will Facebook do anything to stop them?
“Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
Those sinister words came from the mouth of Facebook CEO Mark Zuckerberg… or so you’d think. The video was actually a deepfake made for a much different purpose. Take a look.
Facebook has been trying to do more to stop the spread of fake news. It has expanded its reporting tools so that users can quickly flag questionable content.
That Pelosi video racked up millions of views. Although Facebook eventually labeled the video as “fake,” it did not remove it from the site, and the clip had already spread widely before it was flagged. Now Facebook is left to figure out whether it will remove such videos, and how long it will wait before taking action.
Many individuals, including Pelosi herself, called out Facebook for not removing that video. However, Facebook stuck to its guns.
“If it was the same video, inserting Mr. Zuckerberg for Speaker Pelosi, it would get the same treatment,” said Facebook director of public policy Neil Potts at a parliamentary hearing.
With this new deepfake video of Zuckerberg, we will see if what Potts said was true. Will Facebook let the video stay up? Or will it take a different stance when the CEO is the target?
Essentially, the software analyzes hours of authentic footage of political influencers like Pelosi, Donald Trump, and Joe Biden. The software pays attention to things like how their head moves, speech patterns, and facial expressions so that it can tell if something is authentic.
Deepfake videos can make anyone say or do anything
According to the researchers, the software is about 95% accurate, but they are optimistic that they can get it up to 99% within the next six months. Until detection software can outpace the fake video producers, a quick trick is to look at the eyes. Deepfake-created humans almost never blink.
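The blink heuristic mentioned above has a well-known concrete form: the eye aspect ratio (EAR), which compares the vertical and horizontal distances between eye landmarks and dips sharply when an eye closes. The sketch below assumes dlib-style six-point eye landmarks; the thresholds are illustrative, and a production detector would be far more involved:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the usual
    dlib ordering (eye corners at indices 0 and 3)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count dips of the EAR below the threshold that last
    at least min_frames consecutive frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic landmarks: a "tall" open eye and a "flat" closed one.
open_eye = [(0, 0), (1, -1), (2, -1), (3, 0), (2, 1), (1, 1)]
closed_eye = [(0, 0), (1, -0.1), (2, -0.1), (3, 0), (2, 0.1), (1, 0.1)]

sequence = [open_eye] * 10 + [closed_eye] * 3 + [open_eye] * 10
ears = [eye_aspect_ratio(e) for e in sequence]
print(count_blinks(ears))  # prints 1
```

A detector would run this over a video and flag footage whose blink rate is implausibly low; real systems combine many such cues (head motion, lip sync, lighting) rather than relying on any one.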
With each successive political campaign, we see new tools being used to spread misinformation. In 2016, social networks were used to target certain groups. After seeing a constant stream of mistruths and outright lies, many voters made their choice without an accurate picture of the candidates and policies on both sides.
Today, the misinformation war is becoming even more heated. Doctored videos of House Speaker Nancy Pelosi are being shared on social media:
The doctored video has been viewed on Facebook more than 2.4 million times. We don’t know whether these videos are being produced by a single person, a group, or as part of some kind of larger campaign, but the effects can be devastating.
Political misinformation is nothing new, but we’re now seeing extraordinary amplification of these stories thanks to actual politicians and their social media accounts. Last night, former New York City mayor Rudy Giuliani (President Trump’s personal attorney) shared the altered video on Twitter with the message, “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.” Giuliani later deleted the tweet.
The President himself shared a highly edited video to push his chosen narrative of Speaker Pelosi.
Any amount of editing can change the slant of a story. We often hear complex conversations summarized in one brief soundbite on the news. Even the selection of a specific photo can paint a positive or negative picture of a candidate. In 2004, Howard Dean’s political campaign was completely derailed because of a scream caught by a microphone:
But the thing is, the people in the room couldn’t hear Dean’s scream. It was incredibly loud in the venue. But because TV networks had access to Dean’s isolated microphone, we didn’t hear the sound in context. It didn’t matter. The die was cast.
So political narratives have been shaped from an out-of-context scream and now, doctored videos. The biggest threat, however, is yet to come. Deepfake videos can make anyone appear to say or do anything. AI can generate fake humans. Just as the computer graphics in Hollywood blockbusters become more convincing, we are quickly closing in on an era when we won’t be able to believe our eyes. And that leads us to a very dangerous place.
George Orwell called it in “1984:”
“The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
Except in this case, the evidence being presented to our eyes and ears is being manipulated. Politicians have always lied and mischaracterized their opponents. But we are quickly moving into an era where those lies can be backed up with faked video evidence, then amplified across social media until the damage is too great to overcome.
A quote often misattributed to Mark Twain says, “A lie can travel halfway around the world while the truth is putting on its shoes.” And that’s been true since before the days when everyone in the world had immediate access to an audience of billions.
The social networks themselves aren’t helping. YouTube took down the altered Pelosi video after Axios got in touch with them. Facebook said it would only reduce the video’s reach if it felt it was misleading, but it wouldn’t remove it. Twitter is allowing the altered clip to remain online. How do you fight back against global platforms and itchy clicking fingers?
It’s up to all of us to slow down before retweeting or sharing something shocking. Today, and in the years to come, the faked videos will become even more sophisticated. It’s up to our vigilance and reliable news organizations to make sure the truth isn’t twisted.
The future of video manipulation could have terrifying consequences.
Imagine being able to make it look like any person in the world is doing or saying anything you want. With deepfakes, this is not only possible, but free and easy.
For those who don’t know, deepfakes are videos that use machine learning to superimpose someone’s face onto another person’s body or to impersonate another person entirely. A basic example of the technology can be seen below:
As you can see, the technology is incredibly realistic. With a skilled vocal impersonator, it can be very difficult to tell a deepfake from an actual video.
Deepfakes aren’t the result of some sort of top-secret CIA technology; anyone can download an app and make them without much hassle. Deepfakes are created by machine learning algorithms: one network generates fake frames while a second tries to spot them, and the two are trained against each other until the detector can reliably be fooled. The technology works best when the algorithms are fed tons of footage, which explains why presidents, actors, and other public figures are the most frequent subjects of deepfakes.
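That adversarial back-and-forth can be illustrated with a deliberately tiny, non-neural toy: a “generator” (here just a single number) hill-climbs to fool a “discriminator” that is simultaneously learning where the real data lives. This is a sketch of the adversarial idea only, nothing like a real deepfake pipeline:

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # "real" samples cluster around this value

def real_sample():
    return REAL_MEAN + random.uniform(-0.5, 0.5)

g = 0.0        # generator's output: starts far from the real data
d_center = 0.0  # discriminator's belief about where real data lives

for step in range(200):
    # Discriminator update: nudge its center toward real samples.
    d_center += 0.1 * (real_sample() - d_center)
    # Generator update: hill-climb so its output lands where the
    # discriminator expects real data, i.e. so it gets mistaken for real.
    for candidate in (g - 0.1, g + 0.1):
        if abs(candidate - d_center) < abs(g - d_center):
            g = candidate

print(round(g, 1))  # ends up close to REAL_MEAN
```

In an actual deepfake system, both sides are deep neural networks and the “output” is a rendered face rather than a number, but the dynamic is the same: training continues until the discriminator can no longer tell fake from real.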
There are several ramifications of this technology and nearly all of them are scary. Before we dive into all that, here’s the only harmless example of deepfaking that we’ve seen: Nicolas Cage being inserted into movies he never starred in.
And here’s Steve Buscemi’s face mapped over Jennifer Lawrence for some terrifying reason.
Despite how great those videos are, the vast majority of deepfakes are created for far more nefarious purposes. Currently, the most common use of deepfakes by far is to put the faces of popular celebrity women onto pornstars.
This deepfake has “Star Wars” star Daisy Ridley’s face put on a porn star’s body
Numerous victimized women have spoken out against this gross trend, including Scarlett Johansson, who said, “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired. The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause… the internet is a vast wormhole of darkness that eats itself.”
It’s not just celebrity women either. An increasing number of people have fallen victim to deepfake revenge porn from vengeful exes. Because the technology is so new, there’s no concrete legislation to protect people from being deepfaked into porn. There has been much discussion as to whether the creators of these videos could be charged with identity theft, harassment, or cyberstalking. At the very least, nearly every major porn site bans the uploading of deepfake videos. Mainstream websites like Twitter, Reddit, and Gfycat have also banned the posting of deepfake videos.
In addition to sexual harassment, deepfakes could also have a devastating effect on politics. Deepfakes could be used to make powerful people say alarming or sensational things. It also works in reverse, giving politicians plausible deniability if they are caught saying or doing something embarrassing (for example, if deepfakes had been around in 2016, President Trump could have claimed that the infamous “grab them by the ****” video was a deepfake). The last U.S. presidential election already had enough fake news floating around, so imagine the chaos and misinformation that would be caused by widespread deepfake use.
While intelligence agencies like the CIA are at work developing technology that can distinguish deepfakes, the moral of the story is to apply research and critical thinking whenever you see a video that seems suspicious. Deepfakes are the newest addition to the post-truth, fake-news era, and it is more important than ever that people carefully consider where a video came from before believing or sharing it.