The Autocrat’s New Tool Kit

Richard Fontaine and Kara Frederick, March 15, 2019, Center for a New American Security
https://www.cnas.org/publications/commentary/the-autocrats-new-tool-kit

Chinese authorities are now using the tools of big data to detect departures from “normal” behavior among Muslims in the country’s Xinjiang region—and then to identify each supposed deviant for further state attention. The Egyptian government plans to relocate from Cairo later this year to a still-unnamed new capital that will have, as the project’s spokesman put it, “cameras and sensors everywhere,” with “a command center to control the entire city.” Moscow already has some 5,000 cameras installed with facial-recognition technology, and it can match faces of interest to the Russian state to photos from passport databases, police files and even VK, the country’s most popular social media platform.
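
For readers unfamiliar with the mechanics, facial-recognition matching of this kind generally reduces each face to a numeric vector (an "embedding") and searches a database for near-neighbors. The sketch below is a minimal, hypothetical illustration of that matching step in Python; the dimensions, threshold and data are invented and bear no relation to the actual internals of the Moscow system.

```python
import numpy as np

def match_face(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """Return indices of gallery embeddings whose cosine similarity to the
    probe exceeds the threshold. Gallery rows would come from sources like
    passport photos, police files or scraped social media (illustrative)."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = gallery @ probe          # cosine similarity per row
    return np.nonzero(similarities > threshold)[0], similarities

# Toy data: 128-dimensional embeddings, as a typical face encoder might emit.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))
probe = gallery[42] + rng.normal(scale=0.05, size=128)  # noisy copy of entry 42
hits, scores = match_face(probe, gallery)
print(hits)  # -> [42]
```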

As dystopian and repressive as these efforts sound, just wait: they may soon look like the quaint tactics of yesteryear. A sophisticated new set of technological tools, some now maturing and others poised to emerge over the coming decade, seems destined to wind up in the hands of autocrats around the world. These tools will allow strongmen and police states to bolster their internal grip, undermine basic rights and spread illiberal practices beyond their own borders. China and Russia are positioned to take advantage of this new suite of products and capabilities, but the technologies will soon be available for export, so that even second-tier tyrannies will be able to monitor and mislead their populations more effectively.

Many of these advances will give autocrats new ways to spread propaganda, both internally and externally. One key technology is automated microtargeting. Today’s microtargeting relies on personality assessments to tailor content to segments of a population, based on their psychological, demographic or behavioral characteristics. Russia’s Internet Research Agency reportedly conducted this kind of research during the 2016 U.S. presidential race, harvesting data from Facebook to craft specific messages for individual voters based in part on race, ethnicity and identity. The more powerful microtargeting is, the easier it will be for autocracies to influence speech and thought.
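
To make "tailoring content to segments" concrete, here is a deliberately simple Python sketch of the core move in microtargeting: mapping inferred traits to the message variant expected to resonate. Every trait, segment and message below is invented for illustration and does not describe any actual campaign's system.

```python
# Illustrative sketch of trait-based message selection, the core move in
# microtargeting. All traits, segments and messages here are invented.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    issues: set[str]          # inferred interests, e.g. from page likes

MESSAGES = {
    "economy_young": "Ad variant A: jobs and cost-of-living framing",
    "economy_older": "Ad variant B: pensions and stability framing",
    "identity":      "Ad variant C: in-group/out-group framing",
    "default":       "Generic campaign message",
}

def select_message(p: Profile) -> str:
    """Map a profile to the message variant predicted to resonate most."""
    if "identity" in p.issues:
        return MESSAGES["identity"]
    if "economy" in p.issues:
        return MESSAGES["economy_young" if p.age < 35 else "economy_older"]
    return MESSAGES["default"]

voter = Profile(age=29, issues={"economy"})
print(select_message(voter))  # -> Ad variant A
```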

Until now, such efforts have been mostly limited to the commercial world and have focused on precision advertising: Facebook itself conducts microtargeting, for instance, and Google labeled users “left-leaning” or “right-leaning” for political advertisers in the 2016 election. But private firms are developing artificial intelligence that can automate this customization for whole populations, and government interest is sure to follow. In an October 2018 discussion at the Council on Foreign Relations, Jason Matheny, the former director of the U.S. government’s Intelligence Advanced Research Projects Activity, cited this kind of “industrialization of propaganda” as one reason to beware of the “exuberance in China and Russia towards AI.”

AI-driven applications will soon allow authoritarians to analyze patterns in a population’s online activity, identify those most susceptible to a particular message and target them more precisely with propaganda. In a widely viewed TED Talk in 2017, techno-sociologist Zeynep Tufekci described a world where “people in power [use] these algorithms to quietly watch us, to judge us and to nudge us, to predict and identify the troublemakers and the rebels.” The result, she suggests, may be an authoritarianism that transforms our private screens into “persuasion architectures at scale…to manipulate individuals one by one, using their personal, individual weaknesses and vulnerabilities.” This is likely to mean far more effective “influence campaigns,” aimed at either citizens of authoritarian countries or those of democracies abroad.
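
The targeting step Tufekci describes can be pictured as a ranking problem: fit a model to users who engaged with past messaging, then score and sort everyone else. The Python sketch below shows that pattern on synthetic data; the features, labels and scikit-learn model choice are assumptions made for illustration, not a description of any real pipeline.

```python
# Minimal sketch of "susceptibility" ranking: fit a model on users who
# previously engaged with a message, then score the rest. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Columns: posts per day, share of political content, night-time activity.
X = rng.random((5_000, 3))
# Synthetic ground truth: heavy political posters "engaged" more often.
y = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(0, 0.2, 5_000) > 0.8).astype(int)

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]              # P(engages | features)
most_susceptible = np.argsort(scores)[::-1][:100]  # top 100 targets
print(most_susceptible[:5], scores[most_susceptible[:5]].round(2))
```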

Emerging technologies will also change the ways that autocrats deliver propaganda. State-controlled online “bots” (automated accounts) already plague social media. During Russia’s 2014 invasion of Crimea and in the months afterward, for example, researchers at New York University found that fully half of the tweets from accounts that focused on Russian politics were bot-generated. The October 2018 murder of Washington Post columnist Jamal Khashoggi prompted a surge in messaging from pro-regime Saudi bots.
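
Detecting such accounts often starts with crude behavioral signals. The sketch below is one hypothetical heuristic, not the NYU researchers' actual method: it flags accounts that post implausibly fast or repeat identical text. The thresholds and example data are invented.

```python
# Illustrative bot heuristic: flag accounts that post implausibly often or
# repeat identical text. Thresholds are invented for demonstration.
from collections import Counter

def looks_automated(timestamps: list[float], texts: list[str],
                    max_per_hour: float = 30.0, dup_ratio: float = 0.5) -> bool:
    hours = max((timestamps[-1] - timestamps[0]) / 3600, 1e-9)
    rate = len(timestamps) / hours                      # posts per hour
    most_common = Counter(texts).most_common(1)[0][1] if texts else 0
    duplication = most_common / max(len(texts), 1)      # share of repeats
    return rate > max_per_hour or duplication > dup_ratio

# An account that posted the same slogan 80 times in about two hours:
stamps = [i * 90.0 for i in range(80)]   # one post every 90 seconds
posts = ["Crimea is ours"] * 80
print(looks_automated(stamps, posts))    # -> True
```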

Read the full article and more in The Wall Street Journal.