Deep-fake technology is developing faster than we’re prepared to deal with. For the last few years it’s been getting steadily better as the software has improved and internet content has proliferated. What started as a janky way to copy and paste celebrity faces onto porn videos evolved into a means of comedy, and eventually into an AI-powered, machine-learning tool for political and social manipulation.
That last iteration of deep-fake technology is the one people fear most. It’s all fun and games until this bizarre form of counterfeit reality spills out of porn and entertainment and starts pulling at the strings of society and civilization as we know it. At that point, deep-fake tech could threaten our very understanding of the world around us.
Which is why the FBI’s recent press release is so disconcerting.
On March 10th, the FBI released a warning to America’s media: sometime in the next 12-18 months, the bureau says, the US will experience serious deep-fake cyber attacks on a massive scale. (Though it provides no evidence or explanation for that assertion.)
“The FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft,” the FBI wrote in its summary.
The bureau goes on to explain that since 2019 it has identified multiple foreign “synthetic content” campaigns aimed at promoting particular agendas, be they financial, political, or social.
“[Machine Learning]-generated profile images may help malicious actors spread their narratives, increasing the likelihood they will be more widely shared, making the message and messenger appear more authentic to consumers,” the FBI warns. For now, malicious actors are mostly just distorting information in articles and memes, hyping up the kind of rhetoric that grandparents and crazy aunts and uncles love to share and spread online.
Now, however, those “malicious actors” are starting to use far more advanced methods of manipulation, employing artificial intelligence and machine learning to modify photos and videos so they look or sound like something they aren’t. Businesses could be targeted by competitors out to ruin reputations and inflict financial damage; foreign interests could target political campaigns or candidates; rogue hackers could band together to financially target banks and individuals.
And no one would know who or what was behind such deep-fake attacks until it was already too late. As our news media has proven time and again, errors are almost impossible to retract once the public has consumed them. No matter how much damage control is done, at that point it’s a game of salvage.
So, unless people, businesses, and government branches start cleaning up their cyber hygiene and taking the necessary precautions to avoid being duped, or otherwise attacked with “synthetic content,” things could get very confusing over the next 12-18 months.
Fake ads, videos, photos and interviews might start swirling around online media platforms within the next year, according to the FBI.