Since the social sciences began to emerge as scholarly disciplines in the last quarter of the 19th century, they have frequently offered authoritative intellectual frameworks that have justified, and even shaped, a variety of U.S. foreign policy efforts. They played an important role in U.S. imperial expansion in the late 19th and early 20th centuries. Scholars devised racialized theories of social evolution that legitimated the confinement and assimilation of Native Americans and endorsed civilizing schemes in the Philippines, Cuba, and elsewhere. As attention shifted to Europe during and after World War I, social scientists working at the behest of Woodrow Wilson attempted to engineer a “scientific peace” at Versailles. The desire to render global politics the domain of objective, neutral experts intensified during World War II and the Cold War. After 1945, the social sciences became increasingly central players in foreign affairs, offering intellectual frameworks—like modernization theory—and bureaucratic tools—like systems analysis—that shaped U.S. interventions in developing nations, guided nuclear strategy, and justified the increasing use of the U.S. military around the world.
Throughout these eras, social scientists often reinforced American exceptionalism—the notion that the United States stands at the pinnacle of social and political development, and as such has a duty to spread liberty and democracy around the globe. The scholarly embrace of conventional political values was not the result of state coercion or financial co-optation; by and large social scientists and policymakers shared common American values. But other social scientists used their knowledge and intellectual authority to critique American foreign policy. The history of the relationship between social science and foreign relations offers important insights into the changing politics and ethics of expertise in American public policy.
The history of technology and environmental history are both relatively young disciplines among Americanists, and during their early years they developed as distinctly different and even antithetical fields, at least in topical terms. Historians of technology initially focused on human-made and presumably “unnatural” technologies, whereas environmental historians focused on non-human and presumably “natural” environments. However, in more recent decades both disciplines have moved beyond this oppositional framing. Historians of technology increasingly came to view anthropogenic artifacts like cities, domesticated animals, and machines as extensions of the natural world rather than its antithesis. Even the British and American Industrial Revolutions constituted not a distancing of humans from nature, some scholars suggested, but rather a deepening entanglement with the material environment. At the same time, many environmental historians were moving beyond the field’s initial emphasis on the ideal of an American and often western “wilderness” to embrace a concept of the environment as including humans and productive work. Nonetheless, many environmental historians continued to emphasize the independent agency of the non-human environment of organisms and things. This insistence that all could not be reduced to human culture remained the field’s most distinctive feature. Since the turn of the millennium, the two fields have increasingly come together in a variety of synthetic approaches, including Actor Network Theory, envirotechnical analysis, and neo-materialist theory. As the influence of the cultural turn has waned, the environmental historians’ emphasis on the independent agency of the non-human has come to the fore, gaining wider influence as it is applied to the dynamic “nature” or “wildness” that some scholars argue exists within both the technological and natural environment. The foundational distinctions between the history of technology and environmental history may now be giving way to more materially rooted attempts to understand how a dynamic hybrid environment helps to create human history in all its dimensions—cultural, social, and biological.
Gary R. Edgerton
Television is an ever-evolving and multi-dimensional medium, at once a technology, an industry, an art form, and an institutional force. In the United States, it emerged as an idea whose time had come at the end of World War II. TV eventually grew and matured into the most influential social and cultural catalyst shaping and reflecting American civilization during the second half of the 20th century. Television revolutionized the way citizens and consumers in the United States learned about and communicated with the world; it also recast and re-envisioned the way they experienced themselves and others. More than just escapist entertainment, TV reveals the dynamism and diversity of everyday life in the United States and the evolving nature of the nation’s core values. Television is, moreover, in a continual state of change and renewal. Its history has developed through a prehistory (before 1948), a network era (1948–1975), a cable era (1976–1994), and finally the current digital era (1995–present). Today there are more than 650 networks in the U.S. marketplace; the typical domestic household receives 189 channels and watches more than eight hours of TV a day on average. TV in the 21st century also travels anywhere at any time, given its synergistic relationship with the Internet and a wide array of digital devices. It is now increasingly personalized, interactive, mobile, and on demand. Television is presently a convergent technology, a global industry, a viable art form, a public catalyst, and a complex and dynamic reflection of American society and culture.
Christopher P. Loss
Until World War II, American universities were widely regarded as good but not great centers of research and learning. This changed completely in the press of wartime, when the federal government pumped billions into military research, anchored by the development of the atomic bomb and radar, and into the education of returning veterans under the GI Bill of 1944. The abandonment of decentralized federal–academic relations marked the single most important development in the history of the modern American university. While the government had helped to coordinate and fund the university system before the war, most notably through the country’s network of public land-grant colleges and universities, its involvement after the war became much more hands-on, eventually leading to direct financial support for, and legislative interventions on behalf of, core institutional activities at not only the public land grants but the nation’s mix of private institutions as well. The reliance on public subsidies and on legislative and judicial interventions proved a double-edged sword, however: state action made possible the expansion in research and in student access that became the hallmarks of the post-1945 American university, but it also created a rising tide of expectations for continued support that has proven difficult to meet in fiscally stringent times and amid ongoing political fights over the government’s proper role in supporting the sector.