Since the social sciences began to emerge as scholarly disciplines in the last quarter of the 19th century, they have frequently offered authoritative intellectual frameworks that justified, and even shaped, a variety of U.S. foreign policy efforts. They played an important role in U.S. imperial expansion in the late 19th and early 20th centuries. Scholars devised racialized theories of social evolution that legitimated the confinement and assimilation of Native Americans and endorsed civilizing schemes in the Philippines, Cuba, and elsewhere. As attention shifted to Europe during and after World War I, social scientists working at the behest of Woodrow Wilson attempted to engineer a “scientific peace” at Versailles. The desire to render global politics the domain of objective, neutral experts intensified during World War II and the Cold War. After 1945, social scientists became increasingly central players in foreign affairs, offering intellectual frameworks—like modernization theory—and bureaucratic tools—like systems analysis—that shaped U.S. interventions in developing nations, guided nuclear strategy, and justified the growing use of the U.S. military around the world.
Throughout these eras, social scientists often reinforced American exceptionalism—the notion that the United States stands at the pinnacle of social and political development and, as such, has a duty to spread liberty and democracy around the globe. The scholarly embrace of conventional political values was not the result of state coercion or financial co-optation; by and large, social scientists and policymakers shared the same American values. But other social scientists used their knowledge and intellectual authority to critique American foreign policy. The history of the relationship between social science and foreign relations offers important insights into the changing politics and ethics of expertise in American public policy.
K. Tsianina Lomawaima
In 1911, a group of American Indian intellectuals organized what would become known as the Society of American Indians, or SAI. SAI members convened in annual meetings between 1911 and 1923, and for much of that period the Society’s executive offices were a hub for political advocacy, lobbying Congress and the Office of Indian Affairs (OIA), publishing a journal, offering legal assistance to Native individuals and tribes, and maintaining an impressively voluminous correspondence across the country with American Indians, “Friends of the Indian” reformers, political allies, and staunch critics. Notable Native activists, clergy, entertainers, professionals, speakers, and writers—as well as Native representatives from on- and off-reservation communities—were active in the Society. They worked tirelessly to meet daunting, unrealistic expectations, principally to deliver a unified voice of Indian “public opinion” and to pursue controversial political goals without appearing too radical, especially obtaining U.S. citizenship for Indian individuals and allowing Indian nations to access the U.S. Court of Claims. They maintained their myriad activities with scant financial resources, relying solely on the unpaid labor of dedicated Native volunteers. By 1923, the challenges had exhausted the Society’s substantial human and minuscule financial capital. The Native “soul of unity” demanded by non-Native spectators and hoped for by SAI leaders could no longer hold the center, and the SAI dissolved. Their work was not in vain, but citizenship and the ability to file claims materialized in circumscribed forms. In 1924 Congress passed the Indian Citizenship Act, granting birthright citizenship to American Indians, but citizenship for Indians was deemed compatible with continued wardship status. In 1946 Congress established an Indian Claims Commission, not a court, and successful claims could result only in monetary compensation, not regained lands.
Becky Nicolaides and Andrew Wiese
Mass migration to suburban areas was a defining feature of American life after 1945. Before World War II, just 13% of Americans lived in suburbs. By 2010, however, suburbia was home to more than half of the U.S. population. The nation’s economy, politics, and society suburbanized in important ways. Suburbia shaped habits of car dependency and commuting, patterns of spending and saving, and experiences with issues as diverse as race and taxes, energy and nature, privacy and community. The owner-occupied, single-family home, surrounded by a yard and set in a neighborhood outside the urban core, came to define everyday experience for most American households, and in the world of popular culture and the imagination, suburbia was the setting for the American dream. The nation’s suburbs were an equally critical economic landscape, home to vital high-tech industries, retailing, “logistics,” and office employment. In addition, American politics rested on a suburban majority, and over several decades, suburbia incubated political movements across the partisan spectrum, from grassroots conservatism to centrist meritocratic individualism, environmentalism, feminism, and social justice. In short, suburbia was a key setting for postwar American life.
Even as suburbia grew in magnitude and influence, it also grew more diverse, coming to reflect a much broader cross-section of America itself. This encompassing shift marked two key chronological stages in suburban history since 1945: the expansive, racialized, mass suburbanization of the postwar years (1945–1970) and an era of intensive social diversification and metropolitan complexity (since 1970). In the first period, suburbia witnessed the expansion of segregated white privilege, bolstered by government policies and exclusionary practices and reinforced by grassroots political movements. By the second period, suburbia came to house a broader cross-section of Americans, who brought with them a wide range of outlooks, lifeways, values, and politics. Suburbia became home to large numbers of immigrants, ethnic groups, African Americans, the poor, the elderly, and diverse family types. In the face of stubborn exclusionism by affluent suburbs, inequality persisted across metropolitan areas and manifested anew in proliferating poorer, distressed suburbs. Reform efforts sought to alleviate metro-wide inequality and promote sustainable development using coordinated regional approaches. In recent years, the twin discourses of suburban crisis and suburban rejuvenation have captured the continued complexity of America’s suburbs.
Gary R. Edgerton
Television is an ever-evolving and multi-dimensional medium: at once a technology, an industry, an art form, and an institutional force. In the United States, it emerged as an idea whose time had come at the end of World War II. TV eventually grew and matured into the most influential social and cultural catalyst shaping and reflecting American civilization during the second half of the 20th century. Television revolutionized the way citizens and consumers in the United States learned about and communicated with the world; it also recast and re-envisioned the way they experienced themselves and others. More than just escapist entertainment, TV reveals the dynamism and diversity of everyday life in the United States and the evolving nature of the nation’s core values. Television is, moreover, in a continual state of change and renewal. Its history has unfolded across a prehistory (before 1948), a network era (1948–1975), a cable era (1976–1994), and, finally, the current digital era (1995–present). Today there are more than 650 networks in the U.S. marketplace; the typical domestic household receives 189 channels and watches more than eight hours of TV a day on average. TV in the 21st century also travels anywhere at any time, given its synergistic relationship with the Internet and a wide array of digital devices. It is now increasingly personalized, interactive, mobile, and on demand. Television is presently a convergent technology, a global industry, a viable art form, a public catalyst, and a complex and dynamic reflection of American society and culture.
Ted R. Bromund
The Special Relationship is a term used to describe the close relations between the United States and the United Kingdom. It applies especially to the governmental realms of foreign, defense, security, and intelligence policy, but it also captures a broader sense that both public and private relations between the United States and Britain are particularly deep and close. The Special Relationship is thus a term for a reality that came into being over time as the result of political leadership as well as ideas and events outside the formal arena of politics.
After the political break of the American Revolution and in spite of sporadic cooperation in the 19th century, it was not until the Great Rapprochement of the 1890s that the idea that Britain and the United States had a special kind of relationship took hold. This decade, in turn, created the basis for the Special Relationship, a term first used by Winston Churchill in 1944. Churchill did the most to build the relationship, convinced as he was that close friendship between Britain and the United States was the cornerstone of world peace and prosperity. During and after the Second World War, many others on both sides of the Atlantic came to agree with Churchill.
The post-1945 era witnessed a flowering of the relationship, which was cemented—not without many controversies and crises—by the emerging Cold War against the Soviet Union. After the end of the Cold War in 1989, the relationship remained close, though it was severely tested by further security crises, Britain’s declining defense spending, the evolving implications of Britain’s membership in the European Union, the relative decline of Europe, and an increasing U.S. interest in Asia. Yet on many public and private levels, relations between the United States and Britain continue to be particularly deep, and thus the Special Relationship endures.
For almost a century and a half, successive American governments adopted a general policy of neutrality on the world stage, eschewing involvement in European conflicts and, after the Quasi-War with France, alliances with European powers. Neutrality, enshrined as a core principle of American foreign relations by the outgoing President George Washington in 1796, remained so for more than a century.
Finally, in the 20th century, the United States emerged as a world power and a belligerent in the two world wars and the Cold War. This article explores the modern conflict between traditional American attitudes toward neutrality and the global agenda embraced by successive U.S. governments, beginning with entry into the First World War. With the United States immersed in these titanic struggles, the traditional U.S. support for neutrality eroded considerably. During the First World War, the United States showed some sympathy for the predicaments of the remaining neutral powers. In the Second World War it applied considerable pressure to those states still trading with Germany. During the Cold War, the United States was sometimes impatient with the choices of states to remain uncommitted in the global struggle, while at times it showed understanding for neutrality and pursued constructive relations with neutral states. The wide varieties of neutrality in each of these conflicts complicated the choices of U.S. policymakers. Americans remained torn between memory of their own long history of neutrality and a capacity to understand its potential value, on one hand, and a predilection to approach conflicts as moral struggles, on the other.
Michael R. Anderson
Although the term “Asia-Pacific” was not coined until World War II and the geographic parameters are admittedly imprecise, the regional designation nevertheless has gained popularity in recent decades among policymakers, businesspeople, and non-governmental organizations. Asia-Pacific refers to the regions bordering the western Pacific Ocean: East Asia, Southeast Asia, and Oceania. It excludes some countries that are considered part of the larger Pacific Rim: Russia, Canada, Mexico, and the western nations of Central and South America. American interest in the Asia-Pacific over the past two centuries has been marked by strong and often contradictory impulses. On the one hand, the western Pacific has served as a fertile ground for Christian missionaries, an alluring destination for American commercial enterprises, and a critical launch pad for U.S. global power projection. On the other hand, leading countries in the Asia-Pacific region frequently have challenged U.S. economic and military interests, and the assertion of “Asian values” in recent years has undermined efforts to expand Western political and cultural norms. The United States’ professed “pivot to Asia” has set the stage for the latest chapter in a centuries-long relationship, one that, more than any other, will determine the geopolitical fault lines of the 21st century.
Christopher P. Loss
Until World War II, American universities were widely regarded as good but not great centers of research and learning. This changed completely amid the press of wartime, when the federal government pumped billions into military research, anchored by the development of the atomic bomb and radar, and into the education of returning veterans under the GI Bill of 1944. The abandonment of decentralized federal–academic relations marked the single most important development in the history of the modern American university. While the government had helped to coordinate and fund the university system prior to the war—most notably the country’s network of public land-grant colleges and universities—government involvement after the war became much more hands-on, eventually leading to direct financial support for, and legislative interventions on behalf of, core institutional activities at not only the public land grants but the nation’s mix of private institutions as well. However, the reliance on public subsidies and on legislative and judicial interventions of one kind or another turned out to be a double-edged sword: state action made possible the expansion in research and in student access that became the hallmarks of the post-1945 American university, but it also created a rising tide of expectations for continued support that has proven difficult to meet in fiscally stringent times and in the face of ongoing political fights over the government’s proper role in supporting the sector.
Urban politics provides a means to understand the major political and economic trends and transformations of the last seventy years in American cities. The growth of the federal government, the emergence of powerful new identity- and neighborhood-based social movements, and large-scale economic restructuring have characterized American cities since 1945. The postwar expansion of the scope and scale of the federal government had a direct impact on urban space and governance, particularly as urban renewal fundamentally reshaped the urban landscape and its power configurations. Urban renewal and liberal governance nevertheless spawned new and often violent tensions and powerful opposition movements among old and new residents alike. These movements engendered a generation of city politicians who assumed power in the 1970s. Yet all of these figures were forced to grapple with the larger forces of capital flight, privatization, the war on drugs, mass incarceration, immigration, and gentrification. This confluence of factors meant that as many American cities and their political representatives became demographically more diverse in the 1980s and 1990s, they also became increasingly separated by neighborhood boundaries and divided by the forces of class and economic inequality.
Relations between the United States and Argentina can best be described as a cautious embrace punctuated by moments of intense frustration. Although never the center of U.S.–Latin American relations, Argentina has attempted to carve out a position of influence in the region. As a result, the United States has worked with Argentina and the other nations of the Southern Cone—the region of South America comprising Uruguay, Paraguay, Argentina, Chile, and southern Brazil—on matters of trade and economic development as well as hemispheric security and leadership. While Argentina has attempted to assert its position as one of Latin America’s most developed nations and therefore a regional leader, the equal partnership it sought from the United States never materialized. Instead, competition for markets and U.S. interventionist and unilateral tendencies kept Argentina from attaining the influence and wealth it so desired. At the same time, the United States saw Argentina as an unreliable ally, too sensitive to the pull of its volatile domestic politics. The two nations enjoyed moments of cooperation in World War I, the Cold War, and the 1990s, when Argentine leaders could balance this external partnership with internal demands. Yet at these times Argentine leaders found themselves walking a fine line, as detractors back home saw cooperation with the United States as a violation of their nation’s sovereignty and autonomy. There has always been potential for a productive partnership, but each side’s intransigence and particular concerns limited the relationship’s accomplishments and led to a historical imbalance of power.