Over the past decade, the world has experienced a technological revolution fueled by machine learning (ML). Algorithms remove decision fatigue from buying books and choosing music, as well as the work of turning on lights and driving, allowing humans to focus on activities more likely to bring them happiness. Futurists are now looking to bring ML platforms to more complex aspects of human society, especially combat and policing.
Moralists and tech skeptics aside, this development is inevitable, given the need to make quick security decisions in a world overloaded with information. But as ML-powered weapon platforms replace human soldiers, the risk of governments misusing ML increases. Citizens of liberal democracies can and should demand that governments pushing for the creation of intelligent war machines include provisions preserving the moral frameworks that guide their armed forces.
In his popular book "The End of History and the Last Man," Francis Fukuyama summarized the debates on the ideal political system for achieving human freedom and dignity. From his vantage point in mid-1989, months before the unexpected fall of the Berlin Wall, no system could match democracy and capitalism in generating wealth, lifting people out of poverty and defending human rights; communism and fascism had both failed, producing cruel autocracies that oppressed their people. Without realizing it, Fukuyama had foreshadowed the proliferation of democracy across the world. Democratization proceeded quickly thanks to grassroots efforts in Asia, Eastern Europe and Latin America.
These transitions, however, would not have been possible had the militaries of those countries not acquiesced to reform. In Spain and Russia, the military attempted coups before recognizing the dominant political desire for change. In China, by contrast, the military chose to wipe out the reformers.
The idea that the military holds a veto over reform may seem incongruous to citizens of consolidated democracies. But in societies in transition, the military often has the final say on reform because of its symbiotic relationship with the government. Consolidated democracies, in contrast, benefit from the logic of the Clausewitzian trinity, with its clear division of labor between the people, the government and the military. In this model, the people elect governments to make decisions for the general good of society while providing recruits for the military, which is tasked with executing government policy and protecting public freedom. The trinity, however, relies on a human army whose moral character stems from its origins among the people. The military can refuse orders that harm the public or that represent bad policy likely to lead to dictatorship.
ML risks destabilizing the trinity by removing the human element from the armed forces and subordinating them directly to the government. Developments in ML have created new weapon platforms that rely less and less on humans, as new combat machines become capable of providing security or assassinating targets with only superficial human supervision. A cadre of machines operating without human involvement risks creating a dystopian future in which political reform becomes improbable, because governments will no longer have human military personnel to stop them from opening fire on reformers. These dangers are already evident in China, where the government has no qualms about deploying ML platforms to monitor and control its population while committing genocide.
In the public sphere, there is some recognition of the dangers ML abuses pose to national security. But there has been no substantive debate on how ML might shape democratic governance and reform. There is no sinister reason for this. Rather, many of those who develop ML tools have backgrounds in STEM and lack an understanding of broader social issues. On the government side, executives at the agencies funding ML research often do not know how to interpret ML outputs, instead relying on developers to explain what they are seeing. The measure of government success is simply whether it ensures the security of society. Throughout this process, civilians act as spectators, unable to question how the ML tools used in warfare are designed.
In the short term, this is tolerable because no army is yet made entirely of robots, but the competitive advantage offered by mechanized combat unconstrained by frail human bodies will make intelligent machines essential to the future of war. Moreover, these terminators will need an entire infrastructure of ML-powered satellites, sensors and information platforms to coordinate responses to advances and setbacks on the battlefield, further reducing the role of humans. This will only amplify the power that governments have to oppress their societies.
The risk that democratic societies create tools that lead to this pessimistic outcome is high. The United States is in a ML arms race with China and Russia, both of which are developing and exporting their own ML tools to help dictatorships stay in power and freeze history.
Civil society, however, has a role to play in shaping ML. ML succeeds or fails based on the training data fed to its algorithms, and civil society can work with governments to choose training data that optimizes the enterprise of war while balancing the need to sustain dissent and reform.
By building moral safeguards into its machines, the United States can create tools that instead strengthen the prospects for democracy. Fukuyama's thesis holds only in a world where humans can exercise free will and reform their governments through discussion, debate and elections. The United States, in confronting its authoritarian rivals, should not create tools that precipitate the end of democracy.
Christopher Wall is a social science researcher for Giant Oak, a counterterrorism instructor for Naval Special Warfare, a lecturer on national security statistics at Georgetown University and the co-author of the recent book "The Future of Terrorism: ISIS, al-Qaida and the Alt-Right." The views of the author do not necessarily reflect those of Giant Oak.