April 3, 2019

 


 

By Daniel McGroarty

TES GeoPolicy Editor

 

“Ceci n’est pas une pipe.” (“This is not a pipe.”) The iconic impact of René Magritte’s most-posterized work is due not to the painting itself, but to the sophomoric witticism inscribed beneath the image: it is a painting of a pipe, and not the pipe itself.

 

But that’s not the painting’s title.  Magritte called the work “The Treachery of Images.”

 

And in our AI Age, that treachery is taking on a new lethality.

 

Consider the digital image captures behind “maps” like Google Earth, which offer zoom-lens access to nearly every square foot of Earth at the click of a key. We can tour the world from our laptop screens.

 

Which is wonderful – amusing, entertaining and even informative. But what if what we’re seeing isn’t real? That’s the question posed by a disturbing new piece at DefenseOne, “The Newest AI-Enabled Weapon: ‘Deep-Faking’ Photos of the Earth.”

 

DefenseOne Tech Editor Patrick Tucker lasers in on an AI-enabled capability built on GANs – generative adversarial networks – used “to trick computers into seeing objects in landscapes or in satellite images that aren’t there.” The leader in the GAN technique? That would be China, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency (NGA).

 

GANs work something like this: an AI “learns” to identify land features and built objects by scanning millions of digital images. But once it learns that thousands of pixels arranged in a specific pattern represent a bridge, or a building – or a military base and airstrip – the process can be reversed: pixels can be assembled into a pattern the AI will “see” as the desired object, then popped into enough open-source images, like those found on Google Earth and elsewhere. These “deep fakes,” insinuated into the digital bloodstream, take on a literal life of their own.
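For readers who want to see the adversarial dynamic in miniature, here is a toy sketch in Python using PyTorch. This is not the satellite-imagery pipeline Tucker describes; the image size, layer widths, and training loop are illustrative assumptions. What it shows is the two-player game behind the acronym: a generator assembles pixels from random noise, a discriminator judges real from fake, and each round the generator gets better at producing patterns the network “sees” as real.

```python
# Toy generative adversarial network (GAN) in PyTorch.
# Illustrative assumptions: flattened 28x28 "images," made-up layer
# sizes and learning rates; not the systems described in the article.
import torch
import torch.nn as nn

LATENT = 64        # size of the random noise vector fed to the generator
IMG = 28 * 28      # flattened image size

# Generator: learns to turn noise into pixel patterns that fool the critic.
G = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),   # pixel values scaled to [-1, 1]
)

# Discriminator: learns to score an image as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                # raw logit; the loss applies sigmoid
)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real: torch.Tensor) -> None:
    """One adversarial round: D learns to spot fakes, G learns to make them."""
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Train the discriminator on real images and on current fakes.
    fake = G(torch.randn(n, LATENT))
    d_loss = loss(D(real), ones) + loss(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator so that D labels its fakes as real.
    g_loss = loss(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Scale that same game up to high-resolution overhead imagery, and the generator’s output is, in effect, a forged landscape.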

 

This is more insidious than video “deep fakes” made with tools like FakeApp, such as the clip purporting to show Barack Obama in the Oval Office spouting off-color banter actually mouthed by his alter ego, Jordan Peele. When everything seen and said is digitized, it gets easier for someone to stitch together words we once said into sentences we never uttered. Imagine the incineration of a political candidate via a video “unearthed” from an archive that is in reality an AI fake. But as Tucker notes, “When it comes to deep fake videos of people, biometric indicators like pulse and speech can defeat the fake effect.”

 

So far, anyway.

 

But a faked landscape, Tucker notes, is a static, non-living entity that “…isn’t vulnerable to the same techniques.”

 

The opportunities for mayhem are endless. We may never be able to believe our eyes again. Consider: a third nation plants fake images of a troop movement or missile emplacement, prompting two rival nations on hair-trigger alert to launch a mutually destructive conflict.

 

And then there’s the obverse of GANs’ additive fakery: using AI to pixel over real military facilities or troop deployments with benign imagery.

 

As DefenseOne quotes Myers, the NGA CIO:  “just a handful of expertly manipulated data sets entered into the open-source image supply line could create havoc.”  I’ll say.

 

According to Andrew Hallman, who heads the CIA’s Digital Directorate: “We are in an existential battle for truth in the digital domain.”

 

From what we’re learning about AI, in that existential battle, the smart money’s not on truth.

 

Ceci n’est pas un missile balistique.  Or is it?

 

#  #  #

 

Daniel McGroarty, TES GeoPolicy editor, served in senior positions in the White House and the Department of Defense.