Gregory A. Daddis
For nearly a decade, American combat soldiers fought in South Vietnam to help sustain an independent, noncommunist nation in Southeast Asia. After U.S. troops departed in 1973, the collapse of South Vietnam in 1975 prompted a lasting search to explain the United States’ first lost war. Historians of the conflict and participants alike have since critiqued the ways in which civilian policymakers and uniformed leaders applied—some argued misapplied—military power that led to such an undesirable political outcome. While some claimed U.S. politicians failed to commit their nation’s full military might to a limited war, others contended that most officers fundamentally misunderstood the nature of the war they were fighting. Still others argued “winning” was essentially impossible given the true nature of a struggle over Vietnamese national identity in the postcolonial era. On their own, none of these arguments fully satisfy. Contemporary policymakers clearly understood the difficulties of waging a war in Southeast Asia against an enemy committed to national liberation. Yet the faith of these Americans in their power to resolve deep-seated local and regional sociopolitical problems eclipsed the possibility that there might be limits to that power. By asking military strategists to fight a war and build a nation simultaneously, senior U.S. policymakers demanded more than military strategy could deliver in pursuit of overly ambitious political objectives. In the end, the Vietnam War exposed the limits of what American military power could achieve in the Cold War era.
Spanning countries across the globe, the antinuclear movement was the combined effort of millions of people to challenge the superpowers’ reliance on nuclear weapons during the Cold War. Encompassing an array of tactics, from radical dissent to public protest to opposition within the government, this movement succeeded in constraining the arms race and helping to make the use of nuclear weapons politically unacceptable. Antinuclear activists were critical to the establishment of arms control treaties, although they failed to achieve the abolition of nuclear weapons, as anticommunists, national security officials, and proponents of nuclear deterrence within the United States and Soviet Union actively opposed the movement. Opposition to nuclear weapons evolved in tandem with the Cold War and the arms race, leading to a rapid decline in antinuclear activism after the Cold War ended.
From its inception as a nation in 1789, the United States has engaged in an environmental diplomacy that has included attempts to gain control of resources, as well as formal diplomatic efforts to regulate the use of resources shared with other nations and peoples. American environmental diplomacy has sought to gain control of natural resources, to conserve those resources for the future, and to protect environmental amenities from destruction. As an acquirer of natural resources, the United States has focused on arable land as well as on ocean fisheries, although around 1900 the focus on ocean fisheries shifted toward a desire to protect marine resources from unregulated harvesting.
The main 20th-century U.S. goal was to extend beyond its borders its Progressive-era desire to utilize resources efficiently, meaning the greatest good for the greatest number for the longest time. For most of the 20th century, the United States was the leader in promoting global environmental protection through the best science, especially emphasizing wildlife. Near the end of the century, U.S. government science policy was increasingly out of step with global environmental thinking, and the United States often found itself on the outside. Most notably, the attempts to address climate change moved ahead with almost every country in the world except the United States.
While a few monographs focus squarely on environmental diplomacy, it is safe to say that historians have not come close to tapping the potential of the intersection of the environmental and diplomatic history of the United States.
Michael C. C. Adams
On the eve of World War II many Americans were reluctant to see the United States embark on overseas involvements. Yet the Japanese attack on the U.S. Pacific fleet at Pearl Harbor on December 7, 1941, seemingly united the nation in determination to achieve total victory in Asia and Europe. Underutilized industrial plants expanded to full capacity producing war materials for the United States and its allies. Unemployment was sucked up by the armed services and war work. Many Americans’ standard of living improved, and the United States became the wealthiest nation in world history.
Over time, this proud record became magnified into the “Good War” myth that has distorted America’s very real achievement. As the era of total victories receded and the United States went from leading creditor to debtor nation, the 1940s appeared as a golden age when everything worked better, people were united, and the United States saved the world for democracy (an exaggeration that ignored the huge contributions of America’s allies, including the British Empire, the Soviet Union, and China). In fact, during World War II the United States experienced marked class, sex and gender, and racial tensions. Groups such as gays made some social progress, but the poor, especially many African Americans, were left behind. After being welcomed into the work force, women were pressured to go home when veterans returned looking for jobs in late 1945–1946, losing many of the gains they had made during the conflict. Wartime prosperity stunted the development of a welfare state; universal medical care and social security were cast as unnecessary. Combat had been a horrific experience, leaving many casualties with major physical or emotional wounds that took years to heal. Like all major global events, World War II was complex and nuanced, and it requires careful interpretation.
Daniel J. Sargent
Foreign economic policy involves the mediation and management of economic flows across borders. Over two-and-a-half centuries, the context for U.S. foreign economic policy has undergone dramatic change. Once a fledgling republic on the periphery of the world economy, the United States has become the world’s largest economy, the arbiter of international economic order, and a predominant influence on the world economy. Throughout this transformation, the making of foreign economic policy has involved delicate tradeoffs between diverse interests—political and material, foreign and domestic, sectional and sectoral, and so on. Ideas and beliefs have also shaped U.S. foreign economic policy—from Enlightenment-era convictions in the pacifying effects of international commerce to late 20th-century convictions about the efficacy of free markets. U.S. foreign economic policy, broad in remit, expanded in scope and reach as the United States came, in the 20th century, to exercise managerial responsibility for the world economy.
The Soviet Union’s successful launch of the first artificial satellite Sputnik 1 on October 4, 1957, captured global attention and achieved the initial victory in what would soon become known as the space race. This impressive technological feat and its broader implications for Soviet missile capability rattled the confidence of the American public and challenged the credibility of U.S. leadership abroad. With the U.S.S.R.’s launch of Sputnik, and then later the first human spaceflight in 1961, U.S. policymakers feared that the public and political leaders around the world would view communism as a viable and even more dynamic alternative to capitalism, tilting the global balance of power away from the United States and towards the Soviet Union.
Reactions to Sputnik confirmed what members of the U.S. National Security Council had predicted: the image of scientific and technological superiority had very real, far-reaching geopolitical consequences. By signaling Soviet technological and military prowess, Sputnik solidified the link between space exploration and national prestige, setting a course for nationally funded space exploration for years to come. For over a decade, both the Soviet Union and the United States funneled significant financial and personnel resources into achieving impressive firsts in space, as part of a larger effort to win alliances in the Cold War contest for global influence.
From a U.S. vantage point, the space race culminated in the first Moon landing in July 1969. In 1961, President John F. Kennedy proposed Project Apollo, a lunar exploration program, as a tactic for restoring U.S. prestige in the wake of Soviet cosmonaut Yuri Gagarin’s spaceflight and the failure of the Bay of Pigs invasion. To achieve Kennedy’s goal of sending a man to the Moon and returning him safely to Earth by the end of the decade, the United States mobilized a workforce in the hundreds of thousands. Project Apollo became the most expensive government-funded civilian engineering program in U.S. history, at one point stretching to more than 4 percent of the federal budget. The United States’ substantial investment in winning the space race reveals the significant status of soft power in American foreign policy strategy during the Cold War.
Military assistance programs have been crucial instruments of American foreign policy since World War II, valued by policymakers for combating internal subversion in the “free world,” deterring aggression, and protecting overseas interests. The 1958 Draper Committee, consisting of eight members of the Senate Foreign Relations Committee, concluded that economic and military assistance were interchangeable; as the committee put it, without internal security and the “feeling of confidence engendered by adequate military forces, there is little hope for economic progress.” Less explicitly, military assistance was also designed to uphold the U.S. global system of military bases established after World War II, ensure access to raw materials, and help recruit intelligence assets while keeping a light American footprint. Police and military aid was often invited and welcomed by government elites in so-called free world nations for enhancing domestic security or enabling the swift repression of political opponents. It sometimes coincided with an influx of economic aid, as under the Marshall Plan and Alliance for Progress. In cases like Vietnam, the programs contributed to stark human rights abuses, owing to political circumstances and to the prioritization of national security over civil liberties.
Robert David Johnson
The birth of the United States through a successful colonial revolution created a unique nation-state in which anti-imperialist sentiment existed from the nation’s founding. Three broad points are essential in understanding the relationship between anti-imperialism and U.S. foreign relations. First, the United States obviously has had more than its share of imperialist ventures over the course of its history. Perhaps the better way to address the matter is to remark on—at least in comparison to other major powers—how intense a commitment to anti-imperialism has remained among some quarters of the American public and government. Second, the strength of anti-imperialist sentiment has varied widely and often has depended upon domestic developments, such as the emergence of abolitionism before the Civil War or the changing nature of the Progressive movement following World War I. Third, anti-imperialist policy alternatives have enjoyed considerably more support in Congress than in the executive branch.
Although Americans have adopted and continue to adopt children from all over the world, Asian minors have immigrated and joined American families in the greatest numbers and have most shaped our collective understanding of the process and experiences of adoption. The movement and integration of infants and youths from Japan, the Philippines, India, Vietnam, Korea, and China (the most common sending nations in the region) since the 1940s have not only altered the composition and conception of the American family but also reflected and reinforced the complexities of U.S. relations with and actions in Asia. In tracing the history of Asian international adoption, we can uncover shifting ideas of race and national belonging. The subject enriches the fields of Asian American and immigration history.
Counterinsurgency (known as COIN) is a theory of war that seeks to describe a proven set of techniques that a government may use to defeat a violent, internal, organized challenge to its authority and legitimacy. The term is sometimes also used to describe the set of activities itself (e.g., “conducting counterinsurgency”). The term originates from the middle of the 20th century, when it emerged from officials in U.S. President John F. Kennedy’s administration, as well as from British and French thinkers and practitioners with whom these officials were consulting. The Kennedy administration and its allies were grappling with how to deal with what they viewed as Soviet attempts to destabilize post-colonial governments in the Third World and bring those nascent countries into the Soviet orbit. Encouraged by British and French experience with post-colonial rebellions and earlier imperial policing, the Kennedy administration hoped to apply those lessons to Cold War problems, most notably the growing challenges in Vietnam.
Rebellions, “irregular warfare,” “guerrilla warfare,” and “small wars,” as well as thinking about means to put them down, go back to the beginnings of organized conflict itself. But 20th-century thinkers were informed most especially by British and French theorists of the 19th and early 20th centuries, such as British Colonel Charles E. Callwell and the future Marshal of France, Hubert Lyautey. The most significant influence came from veterans of Britain’s “Emergency” in Malaya (1948–1960), such as Sir Robert Thompson, and of France’s war in Algeria (1954–1962), such as David Galula. Though these theorists differ on a number of points and in emphasis, the intellectual paternity is clear.
At its heart, the premise of counterinsurgency theory is that rebellions can only be eliminated by gaining the support of the population. Because rebels can hide amongst the people, influence them, and convince “fence sitters” to join in an insurgency, the government can only succeed when the majority of the population rejects the rebels and their message, refuses to offer them assistance, and ultimately turns them over to the authorities. Counterinsurgency theorists often invoke an image from a work by Chinese leader Mao Zedong, On Guerrilla Warfare, in which he described the people as water and guerrilla fighters as fish swimming in it.
Theorists argued for decades (indeed, the argument goes on) about whether America’s war in Vietnam failed because the nation was unable or unwilling to fully implement proper counterinsurgency practices. When the U.S.-led wars in Iraq and Afghanistan in the 21st century began to falter, counterinsurgency and its proponents were once again center stage. Indeed, many maintain that, in 2007, the United States began to implement COIN, and that this turned the tide. But this argument remains in dispute, as do the theoretical and historical foundations of COIN more broadly.