Spanning countries across the globe, the antinuclear movement was the combined effort of millions of people to challenge the superpowers’ reliance on nuclear weapons during the Cold War. Encompassing an array of tactics, from radical dissent to public protest to opposition within the government, this movement succeeded in constraining the arms race and helping to make the use of nuclear weapons politically unacceptable. Antinuclear activists were critical to the establishment of arms control treaties, although they failed to achieve the abolition of nuclear weapons, as anticommunists, national security officials, and proponents of nuclear deterrence within the United States and Soviet Union actively opposed the movement. Opposition to nuclear weapons evolved in tandem with the Cold War and the arms race, leading to a rapid decline in antinuclear activism after the Cold War ended.
The Soviet Union’s successful launch of the first artificial satellite, Sputnik 1, on October 4, 1957, captured global attention and achieved the initial victory in what would soon become known as the space race. This impressive technological feat and its broader implications for Soviet missile capability rattled the confidence of the American public and challenged the credibility of U.S. leadership abroad. With the U.S.S.R.’s launch of Sputnik, and then of the first human spaceflight in 1961, U.S. policymakers feared that the public and political leaders around the world would view communism as a viable and even more dynamic alternative to capitalism, tilting the global balance of power away from the United States and toward the Soviet Union.
Reactions to Sputnik confirmed what members of the U.S. National Security Council had predicted: the image of scientific and technological superiority had very real, far-reaching geopolitical consequences. By signaling Soviet technological and military prowess, Sputnik solidified the link between space exploration and national prestige, setting a course for nationally funded space exploration for years to come. For over a decade, both the Soviet Union and the United States funneled significant financial and personnel resources into achieving impressive firsts in space, as part of a larger effort to win alliances in the Cold War contest for global influence.
From a U.S. vantage point, the space race culminated in the first Moon landing in July 1969. In 1961, President John F. Kennedy proposed Project Apollo, a lunar exploration program, as a tactic for restoring U.S. prestige in the wake of Soviet cosmonaut Yuri Gagarin’s spaceflight and the failure of the Bay of Pigs invasion. To achieve Kennedy’s goal of sending a man to the Moon and returning him safely to Earth by the end of the decade, the United States mobilized a workforce in the hundreds of thousands. Project Apollo became the most expensive government-funded civilian engineering program in U.S. history, at one point stretching to more than 4 percent of the federal budget. The United States’ substantial investment in winning the space race reveals the significant status of soft power in American foreign policy strategy during the Cold War.
Energy systems have played a significant role in U.S. history; some scholars claim that they have determined a number of other developments. From the colonial period to the present, Americans have shifted from depending largely on wood and their own bodies, as well as the labor of draft animals; to harnessing water power; to building steam engines; to extracting fossil fuels—first coal and then oil; to distributing electrical power through a grid. Each shift has been accompanied by a number of other striking changes, especially in the modern period associated with fossil fuels. By the late 19th century, in part thanks to new energy systems, Americans were embracing industrialization, urbanization, consumerism, and, in a common contemporary phrase, “the annihilation of space and time.” Today, in the era of climate change, the focus tends to be on the production or supply side of energy systems, but a historical perspective reminds us to consider the consumption or demand side as well. Just as important as the striking of oil in Beaumont, Texas, in 1901, was the development of new assumptions about how much energy people needed to sustain their lives and how much work they could be expected to do. Clearly, Americans are still grappling with the question of whether their society’s heavy investment in coal- and petroleum-based energy systems has been worthwhile.
Cindy R. Lobel
Over the course of the 19th century, American cities developed from small seaports and trading posts to large metropolises. Not surprisingly, foodways and other areas of daily life changed accordingly. In 1800, the dietary habits of urban Americans were similar to those of the colonial period. Food provisioning was very local. Farmers, hunters, fishermen, and dairymen from a few miles away brought food by rowboats and ferryboats and by horse carts to centralized public markets within established cities. Dietary options were seasonal as well as regional. Few public dining options existed outside of taverns, which offered lodging as well as food. Most Americans, even in urban areas, ate their meals at home, which in many cases were attached to their workshops, countinghouses, and offices.
These patterns changed significantly over the course of the 19th century, thanks largely to demographic changes and technological developments. By the turn of the 20th century, urban Americans relied on a food-supply system that was highly centralized and in the throes of industrialization. Cities developed complex restaurant sectors, and majority immigrant populations dramatically shaped and reshaped cosmopolitan food cultures. Furthermore, with growing populations, lax regulation, and corrupt political practices in many cities, issues arose periodically concerning the safety of the food supply. In sum, the roots of today’s urban food systems were laid down over the course of the 19th century.
Mass transit has been part of the urban scene in the United States since the early 19th century. Regular steam ferry service began in New York City in the early 1810s and horse-drawn omnibuses plied city streets starting in the late 1820s. Expanding networks of horse railways emerged by the mid-19th century. The electric streetcar became the dominant mass transit vehicle a half century later. During this era, mass transit had a significant impact on American urban development. Mass transit’s importance in the lives of most Americans started to decline with the growth of automobile ownership in the 1920s, except for a temporary rise in transit ridership during World War II. In the 1960s, congressional subsidies began to reinvigorate mass transit and heavy-rail systems opened in several cities, followed by light rail systems in several others in the next decades. Today concerns about environmental sustainability and urban revitalization have stimulated renewed interest in the benefits of mass transit.
Joel A. Tarr
Urban water supply and sewage disposal facilities are critical parts of the urban infrastructure. They have enabled cities and their metropolitan areas to function as centers of commerce, industry, entertainment, and human habitation. The evolution of water supply and sewage disposal systems in American cities from 1800 to 2015 is examined, with a focus on major turning points especially in regard to technological decisions, public policy, and environmental and public health issues.
Nuclear arms control has existed as long as the armaments themselves. American plans to limit or eliminate these weapons of mass destruction were put forward, even as the United States and nine other countries—the Soviet Union, the United Kingdom, France, China, Israel, South Africa, India, Pakistan, and North Korea—amassed stockpiles of explosives that harnessed the energies generated by the fission or fusion of atomic nuclei. Since 1945, the United States has sought to reduce its arsenal conjointly with the Soviet Union and (after 1991) Russia. Efforts have been made to inhibit new states from acquiring nuclear weapons, discourage their military use, and perhaps even allow for their eventual abolition.
Scholars disagree as to why the United States has engaged in nuclear arms control since World War II. The history of nuclear weapons encompasses intellectual theories and cultural attitudes as much as material or strategic developments. The overarching debate is one of structure versus agency: whether the weapons’ sheer power, or the attitudes of historical actors toward them, has driven arms control. Among those who stress agency, there are two further disagreements: (a) the influence of domestic culture, protest, and politics; and (b) whether nuclear arms control is an end in itself, or merely a means to an end, namely the entrenchment of American power throughout the world.
The intensity of arms control efforts tends to rise and fall with the apparent likelihood of nuclear war. Faith in the country’s nuclear monopoly encouraged Harry Truman to sabotage early efforts at control, while Dwight Eisenhower’s confidence in nuclear deterrence led to a similar outcome. Mounting fears of a U.S.-Soviet thermonuclear exchange in the late 1950s stirred protest movements and diplomatic efforts in the direction of control. The spread of nuclear weapons to new states impelled presidential administrations from John F. Kennedy to Jimmy Carter to work against the expansion of nuclear arms, culminating in the 1968 Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Richard Nixon proved the exception to these trends. Not only did he downplay proliferation, but his pursuit of the 1972 Strategic Arms Limitation Treaty was motivated by a cynical goal: improvement of America’s strategic position after the Vietnam War via détente with the Soviet Union. Rising fear of nuclear war under Ronald Reagan produced two more landmark U.S.-Soviet agreements: the 1987 Intermediate-Range Nuclear Forces Treaty (INF) and the 1991 Strategic Arms Reduction Treaty (START). Since the end of the Cold War, the attention of the United States has swung away from bilateral arms control treaties and nuclear disarmament and toward halting the spread of nuclear weapons in the unipolar moment. The mounting prominence of regional conflicts, failed states, and non-state actors has stolen attention away from efforts to put the atomic genie back in the bottle.
Michael E. Donoghue
The United States’ construction and operation of the Panama Canal began as an idea and developed into a reality after prolonged diplomatic machinations to acquire the rights to build the waterway. Once the canal was excavated, a century-long struggle ensued to hold it in the face of Panamanian nationalism. Washington used considerable negotiation and finally gunboat diplomacy to achieve its acquisition of the Canal. The construction of the channel proved a titanic effort with large regional, global, and cultural ramifications. The importance of the Canal as a geostrategic and economic asset was magnified during the two world wars. But rising Panamanian frustration over the U.S. creation of a state-within-a-state via the Canal Zone, one with a discriminatory racial structure, fomented a local movement to wrest control of the Canal from the Americans. The explosion of the 1964 anti-American uprising drove this process forward toward the 1977 Carter-Torrijos treaties, which established a blueprint for eventual U.S. retreat and transfer of the channel to Panama at the century’s end. But before that historic handover, the Noriega crisis and the 1989 U.S. invasion nearly upended the projected U.S. retreat from the management and control of the Canal.
Early historians emphasized high politics, economics, and military considerations in the U.S. acquisition of the Canal. They concentrated on high-status actors, economic indices, and major political contingencies in establishing the U.S. colonial order on the isthmus. Panamanian scholars brought a legalistic and nationalist critique, stressing that Washington did not create Panama and that local voices have largely been ignored in the grand narrative of the Canal as a great act of progressive civilization. More recent U.S. scholarship has focused on American imperialism in Panama and on race, culture, labor, and gender as major factors that shaped the U.S. presence, the structure of the Canal Zone, and Panamanian resistance to its occupation. The role of historical memory, globalization, and representation, and how the Canal fits into notions of U.S. empire, have also figured more prominently in recent scholarly examination of this relationship. Contemporary research on the Panama Canal has been supported by numerous archives in the United States and Panama, as well as a variety of newspapers, magazines, novels, and films.
Robert G. Parkinson
According to David Ramsay, one of the first historians of the American Revolution, “in establishing American independence, the pen and press had merit equal to that of the sword.” Because notions of unity among the thirteen American colonies were unstable and fragile, print acted as a binding agent that reduced the risk that the colonies would fail to support one another when war with Britain broke out in 1775.
Two major types of print dealt with the political process of the American Revolution: pamphlets and newspapers. Pamphlets were one of the most important conveyors of ideas during the imperial crisis. Often written by elites under pseudonyms and published by booksellers, they have long been regarded by historians as the lifeblood of the American Revolution. There were also three dozen newspaper printers in the American mainland colonies at the start of the Revolution, each producing a four-page issue every week. These weekly papers, along with the one-sheet broadsides that appeared in American cities even more frequently, were the most important communication avenue for keeping colonists informed of events hundreds of miles away. Because of the structure of the newspaper business in the 18th century, the stories that appeared in each paper were “exchanged” from other papers in different cities, creating a uniform effect akin to a modern news wire. The exchange system allowed the same story to appear across North America, and it provided the Revolutionaries with a method to shore up that fragile sense of unity. It is difficult to imagine American independence—as a popular idea, let alone a possible policy decision—without understanding how print worked in colonial America in the mid-18th century.
Since the social sciences began to emerge as scholarly disciplines in the last quarter of the 19th century, they have frequently offered authoritative intellectual frameworks that have justified, and even shaped, a variety of U.S. foreign policy efforts. They played an important role in U.S. imperial expansion in the late 19th and early 20th centuries. Scholars devised racialized theories of social evolution that legitimated the confinement and assimilation of Native Americans and endorsed civilizing schemes in the Philippines, Cuba, and elsewhere. As attention shifted to Europe during and after World War I, social scientists working at the behest of Woodrow Wilson attempted to engineer a “scientific peace” at Versailles. The desire to render global politics the domain of objective, neutral experts intensified during World War II and the Cold War. After 1945, the social sciences became increasingly central players in foreign affairs, offering intellectual frameworks—like modernization theory—and bureaucratic tools—like systems analysis—that shaped U.S. interventions in developing nations, guided nuclear strategy, and justified the increasing use of the U.S. military around the world.
Throughout these eras, social scientists often reinforced American exceptionalism—the notion that the United States stands at the pinnacle of social and political development, and as such has a duty to spread liberty and democracy around the globe. The scholarly embrace of conventional political values was not the result of state coercion or financial co-optation; by and large social scientists and policymakers shared common American values. But other social scientists used their knowledge and intellectual authority to critique American foreign policy. The history of the relationship between social science and foreign relations offers important insights into the changing politics and ethics of expertise in American public policy.