The following blog post was written by Information Professionals Association members Dr. Paul Lieber and Colonel (Ret.) Matthew Coughlin (USMC). We welcome blog submissions from IPA members.
Contrary to popular belief, artificial intelligence (AI) and its sister discipline, machine learning (ML), are not cure-alls for peer-competitor information dominance or for disinformation and misinformation initiatives. These capabilities solve no specific challenge in isolation, and employing AI and ML does not require a new program or tool suite. Rather, they can be ‘baked’ into existing information-driven processes.
Moreover, successful U.S. Government (USG) implementation of AI and ML is a matter of changing paradigms, not people. Thus, despite endless efforts to create meaningful AI/ML capabilities, synchronization bodies, and/or inter-agency organizations, these will all fail by default. The cause is not lack of effort, but a fundamental misunderstanding of proper implementation from the start. Putting more interested parties in a room can produce camaraderie and conformity, but without an understanding of how to implement and integrate, AI/ML for the sake of AI/ML will ultimately produce no tangible effect.
Within USG, moreover, ‘AI’ and ‘ML’ are often incorrectly defined when writing critical policy and when justifying funding for proposed billion-dollar data-driven programs. The latter typically lack a clear vision for meeting overall objectives, especially in addressing and assessing effectiveness against key adversaries.
To be clear, this isn’t about agreeing on a specific USG definition of AI and ML. It’s about building an understanding of what they are not. First, AI and ML are not magical “silver bullets” that produce better measures of performance or effectiveness for information operations/warfare efforts. Second, AI/ML applications do not themselves ‘reason’ about information intervention types or define actions to be taken and/or avoided. Third, they cannot yield smoking guns exposing hidden networks of nefarious actors secretly employing information bombs against the United States. Their implementation can, however, provide essential insight that could very well lead to uncovering such unknown activities.
If properly employed, AI and ML are potentially invaluable workflow and analysis tools for USG to structure, merge, and objectively assess all available USG information warfare data – a critical missing link in its recent military campaigns (e.g., Afghanistan, Iraq). These data can include the social media landscape, secondary analysis reports, Geospatial Intelligence (GEOINT) phenomena, the cyber and physical domains, and Signals Intelligence (SIGINT).
In tech terms: if properly configured and analyst-aligned, AI and ML will semi-automatically apply mathematical models to locate contextual associations between relevant actors and issues across all accompanying data. Good ML ensures that new data strengthen the relevance and confidence of the applied AI algorithms. In tandem, subject matter analysts and experts continuously define and refine model weights to ensure accurate outcomes. End results should be displayed temporally, conveying critical strategic relationships across a physical/information area and over time.
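As an illustration only (the actors, sources, weights, and scores below are invented for this sketch, not drawn from any USG system), the workflow above, weighted association scoring between actors and issues with analyst-tunable source weights and temporal grouping, might look like:

```python
from collections import defaultdict
from datetime import date

# Hypothetical fused records: each links an actor to an issue, tagged with
# the originating discipline and a timestamp. Real inputs would come from
# the merged social media / GEOINT / SIGINT picture the text describes.
RECORDS = [
    {"actor": "Actor A", "issue": "Issue X", "source": "social_media", "when": date(2021, 1, 5)},
    {"actor": "Actor A", "issue": "Issue X", "source": "sigint",       "when": date(2021, 2, 9)},
    {"actor": "Actor B", "issue": "Issue X", "source": "geoint",       "when": date(2021, 2, 20)},
]

# Analyst-defined source weights, continuously refined as described above.
SOURCE_WEIGHTS = {"social_media": 0.4, "sigint": 0.9, "geoint": 0.7}

def association_scores(records, weights):
    """Aggregate weighted actor-issue association scores per month,
    so results can be displayed in temporal fashion."""
    scores = defaultdict(float)
    for r in records:
        month = (r["when"].year, r["when"].month)
        scores[(r["actor"], r["issue"], month)] += weights.get(r["source"], 0.1)
    return dict(scores)

scores = association_scores(RECORDS, SOURCE_WEIGHTS)
```

The point of the sketch is the division of labor: the model aggregates mechanically, while analysts own the weights and interpret the temporal output.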
Looking back: imagine if the U.S. Department of Defense (DoD) had applied these measures to the Afghanistan problem set early in Operation Enduring Freedom (OEF). Backed by a complete and intelligent data picture, how much better could DoD have been at measuring and then weakening Taliban effectiveness and stability…then parlaying this knowledge into providing the Government of Afghanistan the backing and security to support its future longevity?
Logistically, AI/ML tools cannot, and should not aspire to, replicate or replace existing data-driven and/or research efforts. First, AI and ML tools need existing research programs to guide and feed their algorithms. Second, additional capability should never bog down relevant personnel with yet another system to care for and feed; workflows need to be carefully designed for complementary effect. Toward this end, AI and ML capabilities should be linked through application programming interface (API) entry/exit points across tools to ensure all gathered and created data are always engaging with each other.
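To make the API entry/exit idea concrete, here is a minimal sketch (the schema name and payloads are invented for illustration): an ‘exit point’ serializes one tool’s output to a shared JSON contract, and an ‘entry point’ lets a second tool validate and ingest the same contract, so data never sit trapped inside a single system.

```python
import json

SCHEMA = "info-exchange/v1"  # invented schema identifier for illustration

def exit_point(records):
    """Exit point: serialize one tool's findings to the shared contract."""
    return json.dumps({"schema": SCHEMA, "records": records})

def entry_point(payload):
    """Entry point: a second tool validates and ingests the same contract."""
    doc = json.loads(payload)
    if doc.get("schema") != SCHEMA:
        raise ValueError("unrecognized payload schema")
    return doc["records"]

# Round trip: data created by tool A are immediately usable by tool B.
shared = exit_point([{"actor": "Actor A", "score": 0.9}])
ingested = entry_point(shared)
```

Agreeing on the contract, rather than on any single tool, is what keeps the workflow complementary instead of adding another system to care for and feed.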
There is, of course, no perfect AI- or ML-based algorithm or tool to develop or purchase, and thus no ideal solution. Solving this problem requires a team of subject matter experts, data and social scientists, and computer scientists and engineers to devise an objective system of joint analysis, aligned with ongoing operations, that can truly answer the ‘so what’ questions leadership across USG is demanding. It also requires accepting that cognitive security efforts and outcomes do not exist in a vacuum, and thus cannot perfectly point to, or cause, behaviors. Peer competitors should no longer be allowed to drive USG information-driven strategies. They understand our playbook before we react, capitalize on USG procedural shortcomings, and brazenly counter-message social media efforts derived from only part of the data picture.
About the Authors:
Paul Lieber, Ph.D., is TPG’s Chief Scientist (paul.lieber@tpginc.com).
Colonel (Ret.) Matthew Coughlin (USMC) is a Cyber and Information Warfare Consultant for Peraton (mcoughli@peraton.com).