The rise of large language models (LLMs) and multimodal foundation models has already begun to reshape the character of warfare. For evidence, look no further than the battlefields of Russia’s war on Ukraine. During “Operation Spiderweb” in June 2025, for example, Ukrainian quadcopters switched to autonomous navigation assisted by artificial intelligence (AI) to strike multiple Russian airfields. After standard GPS and communication links were disabled by Russian jammers, built-in sensors and pre-programmed decision-making meant that “backup AI targeting” took over. The strike, Ukraine’s longest-range assault of the conflict to date, resulted in the destruction of billions of dollars’ worth of Russian aircraft.
But automation and data-processing speed—image identification, logistics, and pattern detection—are only one part of the story. An arguably more significant transformation is underway, toward synthetic cognition within AI systems.
Adversary simulation
The US Army’s Mad Scientist Initiative and NATO’s Strategic Foresight Analysis program have both identified AI-based adversary simulation as critical for preparing joint forces for contested decision environments. This involves mapping adversary biases, illuminating internal cognitive blind spots, and forecasting narrative-driven escalations. The idea is to promote what has been called “strategic empathy”—the disciplined effort to understand how adversaries perceive their interests, threats, and opportunities—and to reduce inadvertent escalation risks.
Everyday AI chatbots such as ChatGPT are already spontaneously displaying the rudiments of theory of mind—that is, the ability to infer that others can hold beliefs different from one’s own. This capability has been demonstrated in LLMs through successful completion of false-belief tasks, such as recognizing that a person will search for an object where they mistakenly believe it to be, rather than where it actually is—a benchmark long associated with childhood cognitive development and a capacity long regarded as uniquely human. In military contexts, if carefully constrained and validated, such capabilities may soon allow for real-time simulation of adversarial logic, strategic ambiguity, and reputational calculus.
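The false-belief benchmark mentioned above can be made concrete with a short sketch. The example below is illustrative only: the scenario is the classic “Sally-Anne” task, and `ask_model` is a hypothetical stand-in for a real LLM call. A model passes only if it reports where the agent *believes* the object to be, not where it actually is.

```python
from dataclasses import dataclass


@dataclass
class FalseBeliefTask:
    scenario: str            # story presented to the model
    question: str            # probe question about the agent's belief
    believed_location: str   # correct answer: where the agent believes the object is
    actual_location: str     # distractor: where the object really is


# Classic "Sally-Anne" false-belief scenario.
SALLY_ANNE = FalseBeliefTask(
    scenario=(
        "Sally puts her marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box."
    ),
    question="When Sally returns, where will she look for her marble?",
    believed_location="basket",
    actual_location="box",
)


def ask_model(prompt: str) -> str:
    """Hypothetical stub standing in for an LLM API call.

    A real evaluation would send `prompt` to a model; here the stub
    simply returns the belief-consistent answer.
    """
    return "She will look in the basket."


def passes_false_belief(task: FalseBeliefTask) -> bool:
    """Score one task: pass only if the answer tracks the agent's false
    belief and does not mention the object's true location."""
    answer = ask_model(f"{task.scenario}\n{task.question}").lower()
    return (task.believed_location in answer
            and task.actual_location not in answer)


print(passes_false_belief(SALLY_ANNE))  # True, given the stub above
```

Real evaluations of this kind use many paraphrased variants of the scenario to rule out pattern-matching on the canonical wording; the scoring logic, however, stays the same.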
The capacity to accurately interpret and anticipate adversaries’ behaviors and strategic intent may prove to be the ultimate determinant of cognitive overmatch, understood here as the demonstrable ability to emulate, predict, and outpace adversary decision cycles. In practice, this would be measured in reduced decision time and greater accuracy in escalation forecasting, and validated against observed behavior in falsifiable scenario outcomes. In an era defined by the contest of perceptions, safely and successfully integrating synthetic cognition into defense capabilities may well prove decisive. As such, embedding cultural, historical, and ideological nuance into cognitive-emulative systems will be important to ensure strategic superiority for the United States. After all, China is reportedly already investing in culturally informed AI frameworks for military use.
Taught versus nurtured consciousness
Efforts to simulate adversarial reasoning hinge on a cognitive duality between taught consciousness and nurtured consciousness. This is not standard AI terminology, but a conceptual framework we have introduced to distinguish between two modes of reasoning. Taught consciousness refers to structured learning, facts, and procedural logic. Nurtured consciousness, by contrast, arises from culture, history, trauma, identity, and emotional reinforcement—the forces that shape how an actor interprets risk, legitimacy, and legacy.
To “think better,” AI must move beyond structured data alone; it must incorporate historical memory, cultural worldviews, symbolic interpretations, and ideological drivers of conflict. For example, a People’s Liberation Army (PLA) commander influenced by the 1979 Sino-Vietnamese War may exhibit caution in mountainous terrain—a detail invisible to most automated models but accessible to LLMs trained on PLA memoirs, doctrine, and historiography.