This article is a guest post on NoCamels and has been contributed by a third party. NoCamels assumes no responsibility for the content, including facts, visuals, and opinions presented by the author(s).
Ryan E. Long is a non-residential fellow of Stanford Law School’s Center for Internet and Society and Vice-Chair of the California Lawyers Association’s IP Licensing Interest Group. He has written for or been interviewed by outlets such as The Nordic Blockchain Association, El Pais, Cognitive Times, and Digital Trends on emerging technology topics such as artificial intelligence, blockchain, and “deep fake” videos. He is currently an adjunct professor of media law at Pepperdine Law School in Malibu, California.
Picture this: Tomorrow morning you get an audio message on your cell phone from The International New York Times: “Jerusalem: 1,400,000 New Coronavirus Cases!” Within minutes, there is a city-wide panic. You then scratch your head. “Wait a minute, the last census showed there are only about 931,756 people living in Jerusalem.” Luckily, you find out later in the day that the newspaper was hacked and that the story was created by malicious artificial intelligence (AI). Think this is imaginary? Not quite.
Recently, the Jerusalem Post reported that the Palestinian Authority Preventive Security Service arrested two Palestinians suspected of being behind a fake audio message about the discovery of coronavirus cases in the city. Can AI be used to ferret out fake news, or deep fake videos, before they go viral?
Fake news is defined as “stories that are provably false, have enormous traction [popular appeal], and are consumed by millions of people,” according to a 60 Minutes segment aired in 2017, just as the term was becoming part of the vernacular. “Deep fake” videos are cousins of fake news: manipulated, often imperceptibly altered, versions of original videos. One famous recent example, as reported by The Times of Israel, shows Facebook CEO Mark Zuckerberg bragging about controlling “billions of people’s stolen data, all their secrets, their lives, their futures.” That video is satire. But others aren’t, such as this deep fake of Prime Minister Benjamin Netanyahu.
Until now, most fake news and deep fake videos have been produced by humans. That is quickly changing. AI-created fake news and videos, often spread via bots, are proliferating.
Some of this content is clearly satire. For example, The Onion, a satirical website in the US, ran the headline “CDC Releases Instructions For All Americans To Make Their Own Hospitals.” Obviously fake news like this can make people laugh, and that is harmless. However, as the arrest above shows, the Israeli market is just as vulnerable to the spread of fake news that isn’t satire, particularly during times of crisis like this one.
Much fake news, and much “deep fake” content, is defamatory. A great deal of it consists of material statements of fact, not opinion or satire, made about a public figure with “actual malice,” that is, with knowledge of their falsity or with reckless disregard for the truth. This standard, set forth by the U.S. Supreme Court in New York Times v. Sullivan, was adopted in 1977 by the Israeli Supreme Court in Haaretz v. Israel Electric Company. It is, however, a very difficult standard to satisfy. In 2014, for example, former Minnesota governor Jesse “The Body” Ventura won a $500,000 defamation award against the estate of Navy SEAL Chris Kyle, who had claimed in his book that he “decked” Ventura for making statements critical of the SEALs. Even that verdict did not stand: on appeal, the court reversed and remanded because the jury instructions on “actual malice” were in error. And even under an easier legal standard, it is nearly impossible to detect and preempt libelous fake news or “deep fake” videos before they go viral via bots, trolls, or otherwise.
This elusive quality makes fake news and “deep fakes” ideal vehicles for disrupting the free flow of accurate information. Rather than voters or consumers making decisions based on accurate information about a candidate or product, such media can skew their views. While the U.S. information market has become saturated with fake news and deep fake videos, the Israeli market is catching up. The deep fake of Prime Minister Netanyahu is but one example in the Israeli media market.
Nonetheless, some companies, Facebook among them, have opted not to ban or otherwise regulate patently fake political ads. Twitter, by contrast, has opted to ban deceptive fake media that can cause “serious harm.”
Israeli and non-Israeli companies have developed AI to combat the spread of fake news. Tel Aviv-based Cheq, for example, weighs a range of signals to judge the authenticity of content, including a site’s reputation and whether the source of the content is a bot. When a page is red-flagged, Cheq prevents its clients from buying advertisements on it. If the content appears outright malicious, Cheq contacts the publisher or platform.
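Cheq has not published its scoring method, but the general signal-weighing approach can be sketched in a few lines of Python. Everything below, the signal names, weights, and threshold, is a hypothetical illustration, not Cheq’s actual algorithm:

```python
# Hypothetical sketch of signal-based content vetting, loosely inspired by
# the approach described above. Signal names, weights, and the threshold
# are illustrative assumptions, not Cheq's actual method.

from dataclasses import dataclass

@dataclass
class ContentSignals:
    domain_reputation: float    # 0.0 (unknown/bad) to 1.0 (long-standing, trusted)
    author_is_bot_score: float  # 0.0 (likely human) to 1.0 (likely bot)
    has_verified_source: bool   # does the story cite a checkable source?

def authenticity_score(signals: ContentSignals) -> float:
    """Combine individual signals into a single 0..1 authenticity estimate."""
    score = 0.5 * signals.domain_reputation
    score += 0.3 * (1.0 - signals.author_is_bot_score)
    score += 0.2 * (1.0 if signals.has_verified_source else 0.0)
    return score

def should_block_ad_buy(signals: ContentSignals, threshold: float = 0.5) -> bool:
    """Red-flag the page: advise clients not to buy ads against it."""
    return authenticity_score(signals) < threshold

# Example: a page on a low-reputation domain, likely posted by a bot.
page = ContentSignals(domain_reputation=0.2, author_is_bot_score=0.9,
                      has_verified_source=False)
print(should_block_ad_buy(page))  # True -> advise against the ad buy
```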
The MIT-IBM Watson AI Lab recently launched a study on how AI-created fake content can be countered with AI. AI-generated fake news often relies on predictable combinations of words in its syntax. To exploit this, the Watson lab is using AI to pick up these artificial syntax patterns and flag suspicious content, much as a spell checker flags misspelled words.
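The idea of flagging statistically predictable text can be illustrated with a toy model. The sketch below uses simple bigram counts; the Watson lab’s actual work relies on far more powerful neural language models, so treat this only as an illustration of the principle:

```python
# Minimal sketch of "predictability" scoring with a bigram model.
# Real detectors (including the MIT-IBM work referenced above) use large
# neural language models; this toy version only illustrates the idea that
# machine-generated text tends to be statistically more predictable.

from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Count how often each word follows each other word in reference text."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def avg_predictability(text: str, following) -> float:
    """Average probability the model assigns to each observed bigram."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    probs = []
    for prev, nxt in zip(words, words[1:]):
        total = sum(following[prev].values())
        probs.append(following[prev][nxt] / total if total else 0.0)
    return sum(probs) / len(probs)

reference = "the city reported new cases today the city reported new tests today"
model = train_bigram_model(reference)

# Text reusing the model's predictable word combinations scores high;
# a sentence with fresh phrasing scores low and looks more "human".
print(avg_predictability("the city reported new cases", model))   # high
print(avg_predictability("officials disputed the tally", model))  # low
```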
However, AI countermeasures against “computational propaganda” have boomerang issues of their own. Stanford computer science professor Yoav Shoham notes that AI “still can’t exhibit the common sense or the great intelligence of even a 5-year-old.” Consequently, no matter how well AI is programmed, it does not understand the nuances of satire and sarcasm. A joke that is obvious to a human, thanks to the speaker’s intonation or facial expression, could easily be labeled “fake news” by AI. Berkeley philosophy professor John Searle’s Chinese Room thought experiment makes the underlying point: a system that manipulates Chinese symbols according to rules, without understanding them, does not thereby understand Chinese. Errors in the input will produce errors in the output, undetectable to all but those fluent in the language. The same goes for AI.
Of course, AI can robotically compare and contrast likes or other digital expressions of opinion. But more often than not, it cannot determine whether information is true or false without human-supplied benchmarks for comparison, such as census data showing that Jerusalem has only about 931,756 residents. Consequently, a mix of AI and human input is likely the best approach to combatting fake news and deep fake content.
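A toy version of such a benchmark check, using the Jerusalem example from the opening, might look like the following; the claim-extraction pattern and the benchmark table are assumptions for illustration only:

```python
# A minimal sketch of benchmark-based sanity checking, using the article's
# opening example. The benchmark table and the headline format are
# illustrative assumptions; real systems need robust claim extraction.

import re

# Human-supplied benchmarks: known facts AI alone cannot infer.
CITY_POPULATION = {"jerusalem": 931_756}  # per the census figure cited above

def flag_impossible_case_count(headline: str) -> bool:
    """Flag a headline whose claimed case count exceeds the city's population."""
    match = re.search(r"(\w+):\s*([\d,]+)\s+new coronavirus cases",
                      headline, flags=re.IGNORECASE)
    if not match:
        return False  # no checkable claim found
    city = match.group(1).lower()
    claimed = int(match.group(2).replace(",", ""))
    population = CITY_POPULATION.get(city)
    return population is not None and claimed > population

print(flag_impossible_case_count(
    "Jerusalem: 1,400,000 New Coronavirus Cases!"))  # True -> suspicious
```

Even this trivial check works only because a human supplied the benchmark, which is precisely the point: the machine flags the contradiction, but a person decides what counts as ground truth.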