
Are “Deepfakes” The Next Privacy Threat Facing Insurers And Insureds?

The current roster of threats, including ransomware, phishing schemes and hacking, is well understood at this point. Of course, these threats are constantly evolving, as we live in a world where criminals get bored quickly and need to move on. The newest privacy threat may involve elaborately faked videos, called “deepfakes,” which may be used to disparage people. A manipulated video of House Speaker Nancy Pelosi that recently went viral was slowed down to make it appear she was slurring her words [1] following a meeting with President Donald Trump. This incident was the first time the public came face to face with this new threat and saw how believable and potentially damaging these fake videos can be. Further, this technology has gone from being almost unknown to the public to a viable threat within a matter of months. While it is still unclear exactly how this threat will manifest, it is safe to assume that the public will be worried about this technology within the next few years, and insurers should immediately begin considering how it may impact their insureds.

A recent Washington Post article, entitled “Deepfakes Are Coming. We Are Not Ready,” addresses technological developments that may ultimately impact privacy. First, the article provides the following overview of this growing threat [2]:

“Deepfakes are created by something called a “generative adversarial network”, or GAN. GANs are technically complex but operate on a simple principle. There are two automated rivals in the system: a forger and a detective. The forger tries to create fake content while the detective tries to figure out what is authentic and what is forged. Over each iteration, the forger learns from its mistakes. Eventually, the forger gets so good that it is difficult to tell the difference between fake and real content. And when that happens with deepfakes, those are the videos that are likely to fool humans, too.”
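The article stops at this high-level description, but the forger-versus-detective loop it describes can be sketched in a few lines of code. The sketch below is not from the article; it assumes the PyTorch library and a toy one-dimensional data distribution (both assumptions of mine), and it simply shows the two automated rivals training against each other in the way the quoted passage explains.

    # Minimal, illustrative sketch of the "forger vs. detective" principle.
    # Assumes PyTorch; the data here is a toy 1-D Gaussian, not video.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Forger (generator): maps random noise to candidate fake samples.
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Detective (discriminator): outputs the probability that a sample is real.
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 4.0          # "authentic" samples from N(4, 1)
        fake = generator(torch.randn(64, 8))     # forgeries made from random noise

        # Detective's turn: learn to label real samples 1 and forged samples 0.
        d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(64, 1))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Forger's turn: learn from its mistakes by trying to make the
        # detective call its forgeries real.
        g_loss = bce(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # After training, the forger's output is close to the real distribution.
    print("mean of forged samples:", generator(torch.randn(1000, 8)).mean().item())

Over each iteration the forger improves until the detective can no longer reliably tell the two apart, which is the same dynamic that, at much greater scale and with far more complex data, produces convincing fake video.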

The Washington Post article relies on Hany Farid, a professor of computer science at Dartmouth College, to assess this growing threat:

“But, as Farid worries, perhaps the larger threat comes from the destruction of democratic accountability. “Because if it is, in fact, the case that almost anything can be faked well, then nothing is real.” Once deepfakes exist, politicians can pretend that any disqualifying behavior has actually been created by a neural network. As we’ve seen in the Trump era, with a highly polarized electorate, millions will believe what they are told by a politician they support, even when there is overwhelming evidence to the contrary.”

Beyond the potential impact on international relations and the electoral process, the implications this developing technology may have for private individuals are easy to see. As we have seen many times, a technology that starts out being used by the most tech-savvy government actors eventually filters down to low-tech criminals. While still in its early stages, this technology is quickly (and unfortunately) becoming convincing. For example, a video posted to YouTube [3] shows how this technology can be used to take a still picture of Marilyn Monroe or Albert Einstein and bring it to life.

It is not hard to imagine how this technology could be used to release a video that makes a person’s life difficult or to extort a ransom. Over the next few years, we can expect to see a number of unique legal questions concerning this technology. For example, how will legal counsel advise the president of a corporation that has been targeted with a deepfake video? Courts will immediately face issues as to how videos can be authenticated before they are admitted into evidence. And, in the same way that insurers needed time to properly assess the risks posed by prior threats, insurers will face similarly difficult questions even if they decide to provide coverage for the damage done by deepfake videos. That is, how can an insurer be certain the video is indeed fake and would therefore trigger such coverage? It is not difficult to foresee the huge potential business interruption losses that could result from a deepfake video. While we may have some time before deepfake videos begin causing damage, we need to begin considering these difficult questions now, before they become a major threat.