As an advocate of structural realism, Gray[45] questions the role of arms control, since he views the balance of power as a self-sufficient and self-perpetuating system of international security that is preferable. At key moments, cooperation among Afghan politicians has been maintained with a persuasive nudge from U.S. diplomats. Members of the Afghan political elite have long found themselves facing a similar trade-off. When there is a strong leader present, players are likely to hunt the animal the leader chooses. Hunting stags is quite challenging and requires mutual cooperation. Not wanting to miss out on the high geopolitical drama, Moscow invited Afghanistan's former president, Hamid Karzai, and a cohort of powerful elites, among them rivals of the current president, to sit down with a Taliban delegation last week. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. The theory outlined in this paper looks at just this and will be expanded upon in the following subsection. On the other hand, real-life examples of poorly designed compensation structures that create organizational inefficiencies and hinder success are not uncommon. [41] AI, being a dual-use technology, does not lend itself to unambiguously defensive (or otherwise benign) investments. Additionally, the defector can expect to receive the additional expected benefit of defecting and covertly pursuing AI development outside of the Coordination Regime. In this game, "each player always prefers the other to play c, no matter what he himself plays." Indeed, this gives an indication of how important the Stag Hunt is to International Relations more generally. Since the payoff of hunting the stag is higher, these interactions lead to an environment in which the Stag Hunters prosper. If all the hunters work together, they can kill the stag and all eat. Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004). Like the hunters in the woods, Afghanistan's political elites have a great deal, at least theoretically, to gain from sticking together. "This is the third technology revolution." "Artificial intelligence is the future, not only for Russia, but for all humankind." Under the assumption that actors have a combination of both competing and common interests, those actors may cooperate when those common interests compel such action. Two, three, four hours pass, with no trace. Finally, Jervis[40] also highlights the security dilemma, in which increases in one actor's security can inherently lead to the decreased security of a rival state. Both nations can benefit by working together and signing the agreement. [44] Thomas C. Schelling & Morton H. Halperin, Strategy and Arms Control (Pergamon Press: 1985).
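One scenario above has each actor assigning a 50/50 chance to developing a beneficial versus a harmful AI on its own, while seeing the benefit as slightly larger than the harm. A minimal sketch, with purely illustrative numbers of my own choosing rather than values from the paper, shows why unilateral development can still look attractive in expectation under those beliefs.

```python
# Minimal sketch (illustrative numbers only, not from the paper): an actor that
# sees a 50/50 chance of its unilateral AI being beneficial vs. harmful, with
# the benefit perceived as slightly larger than the harm, still computes a
# positive expected value from developing alone.
p_beneficial = 0.5
benefit, harm = 10.0, -9.0      # benefit slightly outweighs harm
expected_value = p_beneficial * benefit + (1 - p_beneficial) * harm
print(expected_value)           # 0.5
```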
The question becomes, why don't they always cheat? For instance, suppose a = 10, b = 5, c = 0, and d = 2. And, seeing how successful the stag hunters are, most hare hunters will convert to stag hunters. Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. As a result of this, security-seeking actions such as increasing technical capacity (even if not explicitly offensive, which is particularly relevant to the wide-encompassing capacity of AI) can be perceived as threatening and met with exacerbated race dynamics. The stag hunters are likely to interact with other stag hunters to seek mutual benefit, while hare hunters rarely care with whom they interact, since they would rather not depend on others for success. Table 7. Here, this is expressed as P_(h|A∨B)(A) · h_(A∨B). Finally, in a historical survey of international negotiations, Garcia and Herz[48] propose that international actors might take preventative, multilateral action in scenarios under the commonly perceived global dimension of future potential harm (for example, the ban on laser weapons or the dedication of Antarctica and outer space solely to peaceful purposes). Using the payoff matrix in Table 6, we can simulate scenarios for AI coordination by assigning numerical values to the payoff variables. [30] Today, government actors have already expressed great interest in AI as a transformative technology. The original stag hunt dilemma is as follows: a group of hunters has tracked a large stag and found it to follow a certain path. Within the arms race literature, scholars have distinguished between types of arms races depending on the nature of arming. "Game Theory for International Accords." This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill. Additional readings provide insight on arms characteristics that impact race dynamics. This section defines suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. The corresponding payoff matrix is displayed as Table 14. This translates into the two-person Stag Hunt: it is an exact version of the informal arguments of Hume and Hobbes. The Stag Hunt is a story that became a game. The academic example is the Stag Hunt. Some observers argue that a precipitous American retreat will leave the country, and even the capital, Kabul, vulnerable to an emboldened, undeterred Taliban, given the limited capabilities of Afghanistan's national security forces. [5] Stuart Armstrong, Nick Bostrom, & Carl Shulman, Racing to the precipice: a model of artificial intelligence development, AI and Society 31, 2 (2016): 201-206. In the current Afghan context, the role of the U.S. military is not that of third-party peacekeeper, required to guarantee the peace in disinterested terms; it has the arguably less burdensome job of sticking around as one of several self-interested hunters, all of whom must stay in the game or risk its collapse. Hunting stags is most beneficial for society but requires a great deal of trust among its members.
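Using the numerical values just given, read here as the standard symmetric Stag Hunt payoffs (a = both hunt stag, b = hare against a stag hunter, c = stag against a hare hunter, d = both hunt hare, an assumption on my part), the following sketch applies a simple discrete-time replicator update to show how a population with enough stag hunters tips toward everyone hunting stag. It is my own illustration, not code from the original text.

```python
# Replicator-style sketch (my own illustration) with the generic symmetric
# Stag Hunt reading assumed above: a=10, b=5, c=0, d=2.
a, b, c, d = 10.0, 5.0, 0.0, 2.0

def expected_payoffs(p_stag):
    """Expected payoff of each strategy when a fraction p_stag hunts stag."""
    stag = p_stag * a + (1 - p_stag) * c
    hare = p_stag * b + (1 - p_stag) * d
    return stag, hare

p = 0.6  # initial share of stag hunters
for _ in range(50):
    stag, hare = expected_payoffs(p)
    average = p * stag + (1 - p) * hare
    p = p * stag / average          # above-average strategies grow
print(round(p, 3))                  # 1.0: the population converges to stag hunting
```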
He found various theories being proposed, suggesting a level-of-analysis problem. Any individual move to capture a rabbit will guarantee a small meal for the defector but ensure the loss of the bigger, shared bounty. Interestingly enough, the Stag Hunt theory can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, or to achieve mutual benefit. Each can individually choose to hunt a stag or hunt a hare. This is taken to be an important analogy for social cooperation. Individuals, factions, and coalitions previously on the same pro-government side have begun to trade accusations with one another. In this section, I survey the relevant background of AI development and coordination by summarizing the literature on the expected benefits and harms from developing AI and what actors are relevant in an international safety context. Although Section 2 describes to some extent that this might be a likely event with the U.S. and China, it is still conceivable that an additional international actor can move into the fray and complicate coordination efforts. The game is a prototype of the social contract. Gray[36] defines an arms race as two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties. However, anyone who hunts rabbit can do so successfully by themselves, but with a smaller meal. If the United States beats a quick path to the exits, the incentives for Afghan power brokers to go it alone and engage in predatory, even cannibalistic behavior, may prove irresistible. If one side cooperates with and one side defects from the AI Coordination Regime, we can expect their payoffs to be expressed as follows (here we assume Actor A defects while Actor B cooperates): For the defector (here, Actor A), the benefit from an AI Coordination Regime consists of the probability that they believe such a regime would achieve a beneficial AI times Actor A's perceived benefit of receiving AI with distributional considerations [P_(b|A)(A∧B) · b_A · d_A]. Table 6: Payoff Matrix for AI Coordination Scenarios, where P_h(A)·h [D,D] > P_h(A)·h [D,C] > P_h(A∧B)·h [C,C]. The matrix above provides one example. [25] For more on the existential risks of Superintelligence, see Bostrom (2014) at Chapters 6 and 8. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely.
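The defector's expected benefit described above, the regime term P_(b|A)(A∧B) · b_A · d_A together with the additional benefit of covert development mentioned earlier (expressed below as P_(b|A)(A) · b_A), amounts to a short piece of bookkeeping. The sketch uses hypothetical placeholder numbers of my own; the variable names simply mirror the notation and are not the paper's calculation.

```python
# Sketch of the defector's expected benefit under [D, C] using the paper's
# notation with hypothetical placeholder numbers (all values are assumptions).
P_regime_beneficial = 0.5   # P_(b|A)(A∧B): A's belief the regime yields beneficial AI
b_A = 10.0                  # A's perceived benefit of beneficial AI
d_A = 0.5                   # distributional share A expects under the regime's terms
P_alone_beneficial = 0.4    # P_(b|A)(A): A's belief its covert effort yields beneficial AI

benefit_from_regime = P_regime_beneficial * b_A * d_A     # 2.5
benefit_from_defecting = P_alone_beneficial * b_A         # 4.0
print(benefit_from_regime + benefit_from_defecting)       # 6.5
```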
I refer to this as the AI Coordination Problem. Table 11. [56] Downs et al., Arms Races and Cooperation. [57] This is additionally explored in Jervis, Cooperation Under the Security Dilemma. [51] An analogous scenario in the context of the AI Coordination Problem could be if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. Additionally, both actors perceive the potential returns to developing AI to be greater than the potential harms. Huntington[37] makes a distinction between qualitative arms races (where technological developments radically transform the nature of a country's military capabilities) and quantitative arms races (where competition is driven by the sheer size of an actor's arsenal).
It comes with colossal opportunities, but also threats that are difficult to predict. Solving this problem requires more understanding of its dynamics and strategic implications before hacking at it with policy solutions. In this model, each actor's incentives are not fully aligned to support mutual cooperation, which should worry individuals hoping to reduce the possibility of developing a harmful AI. But the moral is not quite so bleak. This additional benefit is expressed here as P_(b|A)(A) · b_A. Together, these elements in the arms control literature suggest that there may be potential for states, as untrusting, rational actors existing in a state of international anarchy, to coordinate on AI development in order to reduce future potential global harms.
Finally, a Stag Hunt occurs when the returns for both actors are higher if they cooperate than if either or both defect. [17] Michele Bertoncello and Dominik Wee, Ten ways autonomous driving could redefine the automotive world, McKinsey & Company, June 2015, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/ten-ways-autonomous-driving-could-redefine-the-automotive-world (suggesting that driverless cars could reduce traffic fatalities by up to 90 percent). By failing to agree to a Coordination Regime at all [D,D], we can expect the chance of developing a harmful AI to be highest, as both actors are sparing in applying safety precautions to development. As new technological developments bring us closer and closer to ASI[27] and the beneficial returns to AI become more tangible and lucrative, a race-like competition between key players to develop advanced AI will become acute, with potentially severe consequences regarding safety. [49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production. In this paper, I develop a simple theory to explain whether two international actors are likely to cooperate or compete in developing AI and analyze what variables factor into this assessment. These strategies are not meant to be exhaustive by any means, but hopefully show how the outlined theory might provide practical use and motivate further research and analysis. If increases in security cannot be distinguished as purely defensive, this increases instability. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. As described in the previous section, this arms race dynamic is particularly worrisome due to the existential risks that arise from AI's development, and it calls for appropriate measures to mitigate it. David Hume provides a series of examples that are stag hunts. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations.
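The four models named throughout (Prisoner's Dilemma, Chicken, Deadlock, and Stag Hunt) differ only in how the four payoffs are ordered. The helper below is my own classification sketch, not something from the text, using the conventional labels R (mutual cooperation), T (defecting against a cooperator), S (cooperating against a defector), and P (mutual defection).

```python
# Sketch (not from the paper): classify a symmetric 2x2 game by its payoff
# ordering. R = mutual cooperation, T = defect against a cooperator,
# S = cooperate against a defector, P = mutual defection.
def classify(R, T, S, P):
    if T > R > P > S:
        return "Prisoner's Dilemma"  # defection dominant, mutual defection stable
    if T > R > S > P:
        return "Chicken"             # mutual defection is the worst outcome
    if T > P > R > S:
        return "Deadlock"            # actors prefer mutual defection to mutual cooperation
    if R > T and P > S:
        return "Stag Hunt"           # cooperation pays most, but defection is safer
    return "other"

print(classify(R=10, T=7, S=0, P=2))   # Stag Hunt
print(classify(R=6, T=10, S=0, P=2))   # Prisoner's Dilemma
```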
Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of both the likelihood that the actor itself will develop a harmful AI times that harm, and the expected harm of its opponent developing a harmful AI. Other examples illustrate how game theory might be applied to understand the Taiwan Strait issue. The intuition behind this is laid out in Armstrong et al.'s Racing to the precipice: a model of artificial intelligence development.[55] The authors suggest each actor would be incentivized to skimp on safety precautions in order to attain the transformative and powerful benefits of AI before an opponent. If both choose to row, they can successfully move the boat. Julian E. Barnes and Josh Chin, The New Arms Race in AI, Wall Street Journal, March 2, 2018, https://www.wsj.com/articles/the-new-arms-race-in-ai-1520009261; Cecilia Kang and Alan Rappeport, The New U.S.-China Rivalry: A Technology Race, March 6, 2018, https://www.nytimes.com/2018/03/06/business/us-china-trade-technology-deals.html. Back to the lionesses in Etosha National Park. [54] In a bilateral AI development scenario, the distribution variable can be described as an actor's likelihood of winning times the percentage of benefits gained by the winner (this would be reflected in the terms of the Coordination Regime). Additionally, Koubi[42] develops a model of military technological races that suggests the level of spending on research and development varies with changes in an actor's relative position in a race. In this example, each player has a dominant strategy.
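The skimping incentive noted above can be illustrated with a toy calculation. The function below is my own construction inspired by the cited intuition, not the model from Armstrong et al.: each actor picks a safety level, where more safety lowers the chance of a harmful outcome but slows the program and reduces the chance of reaching AI first, and whether cutting safety pays depends on how large the perceived harm is.

```python
# Toy illustration (my own construction, inspired by the cited intuition, not
# the authors' actual model): each actor picks a safety level s in [0, 1].
def expected_payoff(s_self, s_other, benefit=10.0, harm=-5.0):
    speed_self, speed_other = 1.0 - s_self, 1.0 - s_other
    p_win = speed_self / (speed_self + speed_other + 1e-9)   # share of "development speed"
    p_disaster = 1.0 - (s_self + s_other) / 2.0              # a sloppier race is a riskier race
    return p_disaster * harm + (1.0 - p_disaster) * p_win * benefit

print(round(expected_payoff(0.9, 0.9), 2))               # both careful: 4.0
print(round(expected_payoff(0.6, 0.9), 2))               # unilaterally skimping pays here: 4.75
print(round(expected_payoff(0.6, 0.9, harm=-50.0), 2))   # but not if perceived harm is large: -6.5
```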
The closest approximation of this in International Relations is universal treaties, like the Kyoto Protocol environmental treaty. Formally, a stag hunt is a game with two pure strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. Finally, there is a plethora of other assuredly relevant factors that this theory does not account for or fully consider, such as multiple iterations of game playing, degrees of perfect information, or how other diplomacy-affecting spheres (economic policy, ideology, political institutional setup, etc.) come into play. [6] See infra at Section 2.2 Relevant Actors. Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. [28] Armstrong et al., Racing to the precipice: a model of artificial intelligence development.
Evidence from AI Experts (2017: 11-21), retrieved from http://arxiv.org/abs/1705.08807. Despite the large number of variables addressed in this paper, this is at its core a simple theory with the aim of motivating additional analysis and research to branch off. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed strategy Nash equilibrium. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas. As stated, which model (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) you think accurately depicts the AI Coordination Problem (and which resulting policies should be pursued) depends on the structure of payoffs to cooperating or defecting. Different social/cultural systems are prone to clash. The 18th-century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival. Also, trade negotiations might be better thought of as an iterated game: the game is played repeatedly, and the nations interact with each other more than once over time. No payoffs (that satisfy the above conditions, including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability higher than one half. [23] United Nations Office for Disarmament Affairs, Pathways to Banning Fully Autonomous Weapons, United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/. The Stag Hunt represents an example of a compensation structure in theory. Why do trade agreements even exist? [29] There is a scenario where a private actor might develop AI in secret from the government, but this is unlikely to be the case as government surveillance capabilities improve. The current landscape suggests that AI development is being led by two main international actors: China and the United States. However, a hare is seen by all hunters moving along the path. One example payoff structure that results in a Deadlock is outlined in Table 9. Half a stag is better than a brace of rabbits, but the stag will only be brought down with a concerted effort. The dynamics change once the players learn with whom to interact. In the US, the military and intelligence communities have a long-standing history of supporting transformative technological advancements such as nuclear weapons, aerospace technology, cyber technology and the Internet, and biotechnology. Using game-theoretical representations of state preferences, Downs et al. analyze the conditions under which rival states in an arms race can cooperate. Such a Coordination Regime could also exist in either a unilateral scenario, where one team consisting of representatives from multiple states develops AI together, or a multilateral scenario, where multiple teams simultaneously develop AI on their own while agreeing to set standards and regulations (and potentially distributive arrangements) in advance. As a result, this tradeoff between costs and benefits has the potential to hinder prospects for cooperation under an AI Coordination Regime. Several animal behaviors have been described as stag hunts.
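For a concrete handle on the mixed strategy equilibrium and the risk dominance comparison discussed above, the sketch below (my own helper, assuming the standard symmetric parametrization a > b ≥ d > c) computes the probability with which Stag is played in the mixed equilibrium and identifies the risk-dominant pure equilibrium via the Harsanyi-Selten deviation-loss comparison.

```python
# Sketch (my own helper, not from the text): for a symmetric Stag Hunt with
# payoffs a (both stag), b (hare vs. stag), c (stag vs. hare), d (both hare),
# compute the mixed-strategy Nash equilibrium and the risk-dominant equilibrium.
def stag_hunt_equilibria(a, b, c, d):
    assert a > b >= d > c, "expected the Stag Hunt ordering a > b >= d > c"
    # Probability of Stag that makes the opponent indifferent between strategies
    p_stag_mixed = (d - c) / ((a - b) + (d - c))
    # Harsanyi-Selten: compare deviation losses from each pure equilibrium
    risk_dominant = "Stag" if (a - b) > (d - c) else "Hare"
    return p_stag_mixed, risk_dominant

print(stag_hunt_equilibria(a=10, b=5, c=0, d=2))  # (~0.286, 'Stag')
print(stag_hunt_equilibria(a=3, b=2, c=0, d=2))   # (~0.667, 'Hare')
```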
Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones, especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot, were vital to this paper and are profoundly appreciated. Finally, in the game of chicken, two sides race toward collision in the hopes that the other swerves from the path first.
Finally, Table 13 outlines an example payoff structure that results in a Stag Hunt.