Ronald Reagan’s foreign policy legacy remains hotly contested, and as new archival sources come to light, those debates are more likely to intensify than to recede into the background. In dealings with the Soviet Union, the Reagan administration set the superpowers on a course for the (largely) peaceful end of the Cold War. Reagan began his outreach to Soviet leaders almost immediately after taking office and enjoyed some success, even if public fears of Reagan as a “button-pusher” remained the dominant theme of the period. Mikhail Gorbachev’s election to the post of General Secretary proved the turning point. Reagan, now confident in US strength, and Gorbachev, keen to reduce the financial burden of the arms race, ushered in a new, cooperative phase of the Cold War. Elsewhere, in particular Latin America, the administration’s focus on fighting communism led it to support human rights–abusing regimes at the same time as it lambasted Moscow’s transgressions in that regard. Even so, over the course of the 1980s, the United States began pushing for democratization around the world, even where Reagan and his advisors had initially resisted it, fearing a communist takeover. In part this was a result of public pressure, but the White House recognized and came to support the rising tide of democratization. When Reagan left office, a great many countries that had been authoritarian were no longer, often at least in part because of US policy. US–Soviet relations had improved to such an extent that Reagan’s successor, Vice President George H. W. Bush, worried that the United States had gone too far in working with Gorbachev and been hoodwinked.
Jeffrey F. Taffet
In the first half of the 20th century, and more actively in the post–World War II period, the United States government used economic aid programs to advance its foreign policy interests. US policymakers generally believed that support for economic development in poorer countries would help create global stability, which would limit military threats and strengthen the global capitalist system. Aid was offered on a country-by-country basis to guide political development; its implementation reflected views about how humanity had advanced in richer countries and how it could and should similarly advance in poorer regions. Humanitarianism did play a role in driving US aid spending, but it was consistently secondary to political considerations. Overall, while funding varied over time, amounts spent were always substantial. Between 1946 and 2015, the United States offered almost $757 billion in economic assistance to countries around the world—$1.6 trillion in inflation-adjusted 2015 dollars. Assessing the impact of this spending is difficult; there has long been disagreement among scholars and politicians about how much economic growth, if any, resulted from aid spending and similar disputes about its utility in advancing US interests. Nevertheless, for most political leaders, even without solid evidence of successes, aid often seemed to be the best option for constructively engaging poorer countries and trying to create the kind of world in which the United States could be secure and prosperous.
Ansley T. Erickson
“Urban infrastructure” calls to mind railways, highways, and sewer systems. Yet the school buildings—red brick, limestone, or concrete, low-slung, turreted, or glass-fronted—that hold and seek to shape the city’s children are ubiquitous forms of infrastructure as well. Schools occupy one of the largest line items in a municipal budget, and as many as a fifth of a city’s residents spend the majority of their waking hours in school classrooms, hallways, and gymnasiums. In the 19th and 20th centuries urban educational infrastructure grew, supported by a developing consensus for publicly funded and publicly governed schools (if rarely fully accessible to all members of the public). Even before the state committed to other forms of social welfare, from pensions to public health, and other infrastructure, from transit to fire protection, schooling was a government function.
This commitment to public education ultimately was national, but schools in cities had their own story. Schooling in the United States is chiefly a local affair: Constitutional responsibility for education lies with the states; power is then further decentralized as states entrust decisions about school function and funding to school districts. School districts can be as small as a single town or a part of a city. Such localism is one reason that it is possible to speak about schools in U.S. cities as having a particular history, determined as much by the specificities of urban life as by national questions of citizenship, economy, religion, and culture.
While city schools have been distinct, they have also been nationally influential. Urban scale both allowed for and demanded the most extensive educational system-building. Urban growth and diversity galvanized innovation, via exploration in teaching methods, curriculum, and understanding of children and communities. And it generated intense conflict. Throughout U.S. history, urban residents from myriad social, political, religious, and economic positions have struggled to define how schools would operate, for whom, and who would decide.
During the 19th and 20th centuries, U.S. residents struggled over the purposes, funding, and governance of schools in cities shaped by capitalism, nativism, and white supremacy. They built a commitment to schooling as a public function of their cities, with many compromises and exclusions. In the 21st century, old struggles re-emerged in new form, perhaps raising the question of whether schools will continue as public, urban infrastructure.
Robert R. Gioielli
By the late 19th century, American cities like Chicago and New York were marvels of the industrializing world. The shock urbanization of the previous quarter century, however, brought on a host of environmental problems. Skies were acrid with coal smoke, streams ran fetid with raw sewage, and disease outbreaks were common while parks and green space were rare. From the 1890s until the end of the 20th century, groups of urban residents responded to these hazards with a series of activist movements to reform public and private policies and practices. Those environmental burdens were never felt equally, with the working class, poor, immigrants, and minorities bearing an overwhelming share of the city’s toxic load. By the 1930s, many of the Progressive era reform efforts were finally bearing fruit. Air pollution was regulated, access to clean water improved, and even America’s smallest cities built robust networks of urban parks. But despite this invigoration of the public sphere, after World War II, for many the solution to the challenges of a dense modern city was a private choice: suburbanization. Rather than continue to work to reform and reimagine the city, they chose to leave it, retreating to the verdant (and pollution-free) greenfields at the city’s edge. These moves, encouraged and subsidized by local and federal policies, provided healthier environments for the mostly white, middle-class suburbanites, but created a new set of environmental problems for the poor, working-class, and minority residents left behind. Drained of resources and capital, cities struggled to maintain aging infrastructure and regulate remaining industry, and then exacerbated these problems with destructive urban renewal and highway construction projects. The remaining urban residents responded with a dynamic series of activist movements that emerged out of the social and community activism of the 1960s and presaged the contemporary environmental justice movement.
Since the introduction of “Fordism” in the early 1910s, which emphasized technological improvements and maximizing productive efficiency, US autoworkers have struggled with repetitive, exhausting, often dangerous jobs. Yet beginning with Ford’s Five Dollar Day, introduced in 1914, auto jobs have also provided higher pay than most other wage work, attracting hundreds of thousands of people, especially to Detroit, Michigan, through the 1920s, and again from World War II until the mid-1950s. Successful unionization campaigns by the United Auto Workers (UAW) in the 1930s and early 1940s resulted in contracts that guaranteed particular wage increases, reduced the power of foremen, and created a process for resolving workplace conflicts. In the late 1940s and early 1950s UAW president Walter Reuther negotiated generous medical benefits and pensions for autoworkers. The volatility of the auto industry, however, often brought layoffs that undermined economic security. By the 1950s overproduction and automation contributed heavily to instability for autoworkers. The UAW officially supported racial and gender equality, but realities in auto plants and the makeup of union leadership often belied those principles. Beginning in the 1970s US autoworkers faced disruptions caused by high oil prices, foreign competition, and outsourcing to Mexico. Contract concessions at unionized plants began in the late 1970s and continued into the 2000s. By the end of the 20th century, many American autoworkers did not belong to the UAW because they were employed by foreign automakers, who built factories in the United States and successfully opposed unionization. For good reason, autoworkers who survived the industry’s turbulence and were able to retire with guaranteed pensions and medical care look back fondly on all that they gained from working in the industry under UAW contracts. Countless others, however, left auto work permanently, and often reluctantly, through periodic mass layoffs and the continuous loss of jobs to automation.
Betsy A. Beasley
American cities have been transnational in nature since the first urban spaces emerged during the colonial period. Yet the specific shape of the relationship between American cities and the rest of the world has changed dramatically in the intervening years. In the mid-20th century, the increasing integration of the global economy within the American economy began to reshape US cities. In the Northeast and Midwest, the once robust manufacturing centers and factories that had sustained their residents—and their tax bases—left, first for the South and West, and then for cities and towns outside the United States, as capital grew more mobile and businesses sought lower wages and tax incentives elsewhere. That same global capital, combined with federal subsidies, created boomtowns in the once-rural South and West. Nationwide, city boosters began to pursue alternatives to heavy industry, once understood to be the undisputed guarantor of a healthy urban economy. Increasingly, US cities organized themselves around the service economy, both in high-end, white-collar sectors like finance, consulting, and education, and in low-end pink-collar and no-collar sectors like food service, hospitality, and health care. A new legal infrastructure related to immigration made US cities more racially, ethnically, and linguistically diverse than ever before.
At the same time, some US cities were agents of economic globalization themselves. Dubbed “global cities” by celebrants and critics of the new economy alike, these cities achieved power and prestige in the late 20th century not only because they had survived the ruptures of globalization but because they helped to determine its shape. By the end of the 20th century, cities that were not routinely listed among the “global city” elite jockeyed to claim “world-class” status, investing in high-end art, entertainment, technology, education, and health care amenities to attract and retain the high-income white-collar workers understood to be the last hope for cities hollowed out by deindustrialization and global competition. Today, the extreme differences between “global cities” and the rest of US cities, and the extreme socioeconomic stratification seen in cities of all stripes, are a key concern of urbanists.
Since the early 1800s railroads have served as a critical element of the transportation infrastructure in the United States and have generated profound changes in technology, finance, business-government relations, and labor policy. By the 1850s railroads, at least in the northern states, had evolved into the nation’s first big businesses, replete with managerial hierarchies that in many respects resembled the structure of the US Army. After the Civil War ended, the railroad network grew rapidly, with lines extending into the Midwest and ultimately, with the completion of the first transcontinental railroad in 1869, to the Pacific Coast. The last third of the 19th century was characterized by increased militancy among railroad workers, as well as by the growing danger that railroading posed to employees and passengers. Intense competition among railroad companies led to rate wars and discriminatory pricing. The presence of rebates and long-haul/short-haul price differentials led to the federal regulation of the railroads in 1887. The Progressive Era generated additional regulation that reduced profitability and discouraged additional investment in the railroads. As a result, the carriers were often unprepared for the traffic demands associated with World War I, leading to government operation of the railroads between 1917 and 1920. Highway competition during the 1920s and the economic crises of the 1930s provided further challenges for the railroads. The nation’s railroads performed well during World War II but declined steadily in the years that followed. High labor costs, excessive regulatory oversight, and the loss of freight and passenger traffic to cars, trucks, and airplanes ensured that by the 1960s many once-profitable companies were on the verge of bankruptcy. A wave of mergers failed to halt the downward slide. The bankruptcy of Penn Central in 1970 increased public awareness of the dire circumstances and led to calls for regulatory reform. The 1980 Staggers Act abolished most of the restrictions on operations and pricing, thus revitalizing the railroads.
Changing foodways—the consumption and production of food, access to food, and debates over food—shaped the nature of American cities in the 20th century. As American cities transformed from centers of industrialization at the start of the 20th century to post-industrial societies at its end, food cultures in urban America shifted in response to the ever-changing urban environment. Despite these shifts, cities remained centers of food culture, diversity, and food reform.
Growing populations and waves of immigration changed the nature of food cultures throughout the United States in the 20th century. These changes were significant, all contributing to an evolving sense of American food culture. For urban denizens, however, food choice and availability were dictated and shaped by a variety of powerful social factors, including class, race, ethnicity, gender, and laboring status. While cities possessed an abundance of food and a variety of locations in which to consume it, fresh food often remained difficult for the urban poor to obtain as the 20th century ended.
As markets expanded from 1900 to 1950, regional geography became a less important factor in determining what types of foods were available. In the second half of the 20th century, even global geography became less important to food choices. Citrus fruit from the West Coast was readily available in northeastern markets near the start of the century, and off-season fruits and vegetables from South America filled shelves in grocery stores by the end of the 20th century. Urban Americans became further disconnected from their food sources, but this dislocation spurred counter-movements that embraced ideas of local, seasonal foods and a rethinking of the city’s relationship with its food sources.
The foreign relations of the Jacksonian age reflected Andrew Jackson’s own sense of the American “nation” as long victimized by non-white enemies and weak politicians. His goal as president from 1829 to 1837 was to restore white Americans’ “sovereignty,” to empower them against other nations both within and beyond US territory. Three priorities emerged from this conviction.
First, Jackson was determined to deport the roughly 50,000 Creeks, Cherokees, Choctaws, Chickasaws, and Seminoles living in southern states and territories. He saw them as hostile nations who threatened American safety and checked American prosperity. Far from a domestic issue, Indian Removal was an imperial project that set the stage for later expansion over continental and oceanic frontiers.
Second and somewhat paradoxically, Jackson sought better relations with Great Britain. These were necessary because the British Empire was both the main threat to US expansion and the biggest market for slave-grown exports from former Indian lands. Anglo-American détente changed investment patterns and economic development throughout the Western Hemisphere, encouraging American leaders to appease London even when patriotic passions argued otherwise.
Third, Jackson wanted to open markets and secure property rights around the globe, by treaty if possible but by force when necessary. He called for a larger navy, pressed countries from France to Mexico for outstanding debts, and embraced retaliatory strikes on “savages” and “pirates” as far away as Sumatra. Indeed, the Jacksonian age brought a new American presence in the Pacific. By the mid-1840s the United States was the dominant power in the Hawaiian Islands and a growing force in China. The Mexican War that followed made the Union a two-ocean colossus—and pushed its regional tensions to the breaking point.
While colonial New Englanders gathered around town commons, settlers in the Southern colonies sprawled out on farms and plantations. The distinctions had more to do with the varying objectives of these colonial settlements and the geography of deep-flowing rivers in the South than with any philosophical predilections. The Southern colonies did indeed sprout towns, but these were places of planters’ residences, planters’ enslaved Africans, and the plantation economy, an axis that would persist through the antebellum period. Still, the aspirations of urban Southerners differed little from those of their Northern counterparts in the decades before the Civil War. The institution of slavery and an economy emphasizing commercial agriculture hewed the countryside close to the urban South, not only in economics but also in politics. The devastation of the Civil War rendered the ties between city and country in the South even tighter. The South participated in the industrial revolution primarily to the extent of processing crops. Factories were often located in small towns and did not typically contribute to urbanization. City boosters aggressively sought and subsidized industrial development, but a poorly educated labor force and the scarcity of capital restricted economic development. Southern cities were more successful in legalizing the South’s culture of white supremacy through legal segregation and the memorialization of the Confederacy. But the dislocations triggered by World War II and the billions of federal dollars poured into Southern urban infrastructure and industries generated hope among civic leaders for a postwar boom. The civil rights movement after 1950, with many of its most dramatic moments focused on the South’s cities, loosened the connection between Southern city and region as cities chose development rather than the stagnation that was certain to occur without a moderation of race relations. The predicted economic bonanza occurred. Young people left the rural areas and small towns of the South for the larger cities to find work in the postindustrial economy and, for the first time in over a century, the urban South received migrants in appreciable numbers from other parts of the country and the world. Spatial distinctions and historical differences (particularly those related to the Civil War) linger in Southern cities, but exceptionalism is a fading characteristic.