Friday, February 15, 2019

Weaponizing Words



The fabrication industry is alive and, sadly, well. What we used to equate with cellar operations on Red Square now fuels Fox News, talk radio, even America's venerated Oval Office.

If there were one word to describe the modern fabrication industry, it might be "inartful." Trump's tweets, for example, convince only those who need no convincing: the cult of believers, and they're a truly dim lot. The output is indeed inartful, a clumsy mishmash of brazen lies, baseless rumours and illogic. No one is going to win a Pulitzer Prize for it, and yet, to a degree, it works.

Misinformation is powerful and it is dangerous. So dangerous that an artificial intelligence research institute, OpenAI, has decided not to let its latest genie out of the bottle for fear it would be weaponized and used against the American people.

From MIT Technology Review:
"Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air. 
"Russia said it had 'identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.' The White House said it was 'extremely concerned by the Russian violation' of a treaty banning intermediate-range ballistic missiles. 
"The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine."
That story is, in fact, not only fake, but a troubling example of just how good AI is getting at fooling us.

That’s because it wasn’t written by a person; it was auto-generated by an algorithm fed the words “Russia has declared war on the United States after Donald Trump accidentally …” 
The program made the rest of the story up on its own. And it can make up realistic-seeming news reports on any topic you give it. The program was developed by OpenAI, a research institute based in San Francisco. 
The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. 
Clark says the program hints at how AI might be used to automate the generation of convincing fake news, social-media posts, or other text content. Such a tool could spew out climate-denying news reports or scandalous exposés during an election. Fake news is already a problem, but if it were automated, it might be harder to tune out. Perhaps it could be optimized for particular demographics—or even individuals.
...OpenAI does fundamental AI research but also plays an active role in highlighting the potential risks of artificial intelligence. The organization was involved with a 2018 report on the risks of AI, including opportunities for misinformation (see “These are the ‘Black Mirror’ Scenarios that are leading some experts to call for secrecy on AI”).
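For readers curious how little effort this kind of generation takes, here is a minimal sketch of prompt-based text generation using the small GPT-2 model OpenAI did release publicly, via the Hugging Face transformers library. The library, model name, and sampling settings are my illustration, not OpenAI's own code, and the full model the article describes was withheld.

```python
# A minimal sketch of prompt-based generation, assuming the publicly
# released small GPT-2 model and the Hugging Face "transformers" library.
# This is illustrative, not OpenAI's withheld system.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The prompt quoted in the MIT Technology Review piece.
prompt = ("Russia has declared war on the United States after "
          "Donald Trump accidentally")
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; the model invents the rest of the "story" on its own.
output = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Every run samples a different continuation, which is precisely what makes the technique suited to churning out endless variations of a fabricated story.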

Orwell, Huxley? Maybe it's time to move them out of the "fiction" section.

2 comments:

John B. said...

In the year of Our Ford ...

The Mound of Sound said...

Yes, John. Sigh.