Era 14 Contemporary United States: Domestic Policies (1970–Today)

www.njcss.org

Engaging High School Students in Global Civic Education Lessons in U.S. History

The relationship between the individual and the state exists in every country, society, and civilization. Questions about individual liberty, civic engagement, government authority, equality and justice, and protection matter to every demographic group. In your teaching, consider the examples and questions provided below: they should be familiar to students from the history of the United States, and they can be applied to the experiences of people around the world.

These civic activities present civics in a global context, since civic education happens in every country. The design is flexible: use a single activity, let students explore multiple activities in groups, or leave one as a lesson for a substitute teacher. The lessons are free, although a donation to the New Jersey Council for the Social Studies is greatly appreciated. www.njcss.org

During the last quarter of the 20th century and the first quarter of the 21st century, the United States and the world experienced rapid changes in the environment, technology, human rights, and governments. During this period there were three economic crises, a global pandemic, and large migrations of populations. There were also new opportunities in health care, biotechnology, and sustainable sources of energy. Debates over individual freedoms, human rights, guns, voting, affordability, and poverty were present in many countries, including the United States.

As the United States became more diverse and inclusive after the 1965 Immigration and Nationality Act, our population became divided over the assimilation of immigrants and over restricting the number entering the United States. The civil liberties in our Constitution came under challenge as people demanded “law and order.” One civil liberty that has weakened over time is the “Miranda Warning” from the U.S. Supreme Court decision in Miranda v. Arizona (1966).

“Ernesto Miranda was convicted on charges of kidnapping and rape. He was identified in a police lineup and questioned by the police. He confessed and then signed a written statement without first having been told that he had the right to have a lawyer present to advise him (under the Sixth Amendment) or that he had a right to remain silent (under the Fifth Amendment). Miranda’s confession was later used against him at his trial and a conviction was obtained. When Miranda’s case came before the United States Supreme Court, the Court ruled that “detained criminal suspects, prior to police questioning, must be informed of their constitutional right against self-incrimination and the right to an attorney.” The court explained that “a defendant’s statements to authorities are inadmissible in court unless the defendant has been informed of their right to have an attorney present during questioning and an understanding that anything they say will be held against them.” The court reasoned that these procedural safeguards were required under the United States Constitution.”

Miranda rights typically do not apply to individuals stopped for traffic violations until the individual is taken into custody. There are four rights that are usually read to someone about to be interrogated or detained against their will.

  • The Right to Remain Silent: You are not obligated to answer any questions from law enforcement.
  • Anything You Say Can Be Used Against You: Statements you make during questioning can be presented as evidence in court.
  • The Right to an Attorney: You have the right to consult with a lawyer before answering questions and to have one present during interrogation.
  • If You Can’t Afford a Lawyer, One Will Be Provided: This guarantees access to legal counsel, regardless of your financial situation.

This basic civil liberty has weakened over time, giving more power to the police (the government). This power has resulted in forced confessions, false statements by the police, accusations of resisting arrest when people decline to provide basic information, and delays in reading the Miranda Warning. In Vega v. Tekoh (2022), the U.S. Supreme Court held that Miranda warnings are not constitutional rights but judicially crafted rules, significantly weakening this civil liberty as a constitutional protection.

Unlike in the United States, in Japan individuals are, in practice, often presumed guilty. Suspects may be interrogated without a lawyer present, and the right to remain silent offers little protection in practice. Many people, including juveniles, may be detained for months as the authorities try to obtain a signed confession. Most people are unaware of these practices because of Japan’s reputation as a democracy and its international human rights record.

“Tomo A. was arrested in August 2017 for allegedly killing his six-week-old child by shaking. He spent nine months in detention awaiting trial, and during that time, prosecutors told him that either he or his wife must have killed their baby and that his wife would be prosecuted if he did not confess. He was acquitted in November 2018.”

Bail is not an option during the pre-indictment period, and it is frequently denied after a person is indicted for a crime. Pre-indictment detention is initially limited to 10 days but can be extended, on a prosecutor’s application, to a total of up to 23 days. Individuals who are released are watched closely, and new arrests are fairly common.

“Yusuke Doi, a musician, was held for 10 months without bail after being arrested on suspicion of stealing 10,000 yen (US$90) from a convenience store. His application for bail was denied nine times. Even though he was ultimately acquitted, a contract that Doi had signed with a record company prior to his arrest to produce an album was cancelled, resulting in financial loss and setting back his career.”

Police often use intimidation, threats, verbal abuse, and sleep deprivation to get someone to confess or provide information. The Japanese Constitution states that “no person shall be compelled to testify against himself” and a “confession made under compulsion, torture or threat, or after prolonged arrest or detention shall not be admitted in evidence.”

The accused are not allowed to meet, call, or even exchange letters with anyone else, including family members. Many individuals interviewed by Human Rights Watch cited this ban on communications as a cause of significant anxiety while in detention.

In 2015, Kayo N. was arrested for conspiracy to commit fraud. Kayo N. said that she worked as a secretary at a company from February 2008 to October 2011. In December 2008, the company president asked her to become the interim president of another company owned by her boss while a replacement was sought. She said that she was unaware that the company only existed on paper and that her boss had previously been blacklisted from obtaining loans. After her arrest and detention, the judge issued a contact prohibition order on the grounds that she might conspire to destroy evidence. Kayo N. was not allowed to see anyone but her lawyer for one year, could not receive letters, and could only write to her two adult sons with the permission of the presiding judge.

She said: “After I was moved to the Tokyo detention center, I was kept in the ‘bird cage’ [solitary confinement] from April 2016 to July 2017. It was so cold that it felt like sleeping in a field; I had frostbite. I spoke only twice during the day, to call out my number. It felt like I was losing my voice. The contact prohibition order was removed one year after my arrest. However, I remained in solitary confinement.”

Kayo N. said she did not know why she had been put in solitary confinement. She says that police also interrogated her sons to pressure her to confess. The long trial process also exacerbated her financial hardships. She was sentenced to three years’ imprisonment.

Japan has a 99.8 percent conviction rate in cases that go to trial, according to 2021 Supreme Court statistics.

Questions:

  1. Should the rights of an individual receive greater or lesser weight than the police powers of the state when someone is accused of a criminal offense?
  2. How can the Miranda rights be protected and preserved in the United States or should they be interpreted and implemented at the local or state level?
  3. How is Japan able to continue with its preference for police powers when international human rights organizations have called for reform?
  4. In the context of detainment by federal immigration officers in the United States in 2026, do U.S. citizens (and undocumented immigrants) have any protected rights or an appeal process when detained without cause?

How to Interact with Police (Video, 20 minutes)

The Erosion of Miranda (American Bar Association)

Vega v. Tekoh (2022)

Japan’s “Hostage Justice” System (Human Rights Watch)

Caught Between Hope and Despair: An Analysis of the Japanese Criminal Justice System (University of Denver)

Activity #2: Gun Laws in the United States and New Zealand

Since 1966, 1,728 people have been killed and 2,697 injured in mass public shootings in the United States. A mass shooting is commonly defined as three or more individuals being killed (Investigative Assistance for Violent Crimes Act, 2012); the U.S. Federal Bureau of Investigation does not define a mass shooting by a specific number of deaths. Technology, especially the production of ‘ghost guns’ with 3-D printers, has contributed to gun violence. Handguns are used in 73% of mass shootings; rifles, shotguns, assault weapons, and multiple weapons are also used.

Before 2008, the District of Columbia prohibited the possession of usable handguns in the home. This was challenged by Dick Heller, a special police officer in the District of Columbia who was licensed to carry a firearm while on duty. He applied to the chief of police for a one-year license to keep a handgun in his home. The chief of police had the authority to grant a temporary license but denied Heller’s application, and Heller appealed the decision in federal court.

The Second Amendment states that “A well-regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.” Several state constitutions give citizens the right to bear arms in their own defense, outside an organized state militia. Historically, individuals also used these weapons against Native Americans and enslaved people.

The United States has the highest number of civilian-owned guns per person in the world. Estimates range from 300 million (about one per person) to over 400 million. Even with effective legislation restricting guns, these weapons would still be available. Approximately 10 million firearms are produced annually.

A discussion in your classroom might focus on the debate within the state legislatures during the ratification of the Constitution regarding the use of arms for a state militia and the right of individuals to carry weapons for hunting. The Bruen decision (2022) requires that gun laws today be consistent with the historical understanding from when the states ratified the Bill of Rights.

1. Is this requirement possible and relevant? In 1789, people hunted for their food; today, people shop in supermarkets. In 1789, the federal government relied on the states to support an army; today, we have a highly trained military.

2. Has the technology for producing guns changed the right to keep and bear arms? Assault weapons and ghost guns did not exist 200 years ago.

3. Should restrictions on the right to keep and bear arms be considered now that the population of the United States is over 300 million? A significant portion of the population lives in urban areas with high-rise apartment complexes. Should the history of previous centuries, alongside the mass shooting events of the 21st century, be carefully considered in the debate over restricting gun ownership?

March 15, 2019 was one of the darkest days in New Zealand’s history: 51 people were killed and 50 others wounded when a gunman fired on two mosques in the city of Christchurch, the worst peacetime mass shooting the country has experienced. Within one month, New Zealand’s Parliament voted 119-1 for a nationwide ban on semi-automatic weapons and assault rifles. In addition to this sweeping reform of gun laws, a special commission was set up to explore broader issues around the accessibility of weapons, the role of social media, and education.

Australia also introduced a ban on automatic and semi-automatic weapons and restrictive licensing laws after a mass shooting in 1996.  Some states in the United States have enacted strict laws restricting ghost guns (New Jersey, Oregon) and automatic weapons (New Jersey). However, the debate has been contentious in these states and the almost unanimous vote in New Zealand is not likely in the United States.

Questions:

  1. How significant would restrictive legislation in the United States be in curtailing mass shootings and/or murders?
  2. In addition to the influence of the gun lobby in the United States, what is the next most powerful influence against gun reforms in the United States?
  3. Is it possible for states to have their own restrictive gun laws with the Bruen decision by the U.S. Supreme Court?
  4. Why do you think restrictive gun laws were enacted in New Zealand and Australia? (absence of a constitutional protection, common national identity, religious beliefs, culture, leadership by the government, public outrage, etc.)
  5. As a class, do you think gun reform laws in the United States are possible in the next 5-10 years?

District of Columbia v. Heller (2008)

Why Heller is Such Bad History (Duke Center for Firearms Law)

15 Years After Heller: Bruen is Unleashing Chaos, But There is Hope for Regulations (Alliance for Justice)

Mass Shooting Factsheet (Rockefeller Institute of Government)

Gun Ownership in the U.S. by State (World Population Review)

Gun Control: New Zealand Shows the Way (International Bar Association)

Firearms Reforms (New Zealand Ministry of Justice)

Activity #3: Affordability in the United States and Italy

According to the Congressional Research Service (CRS), poverty in the United States decreased from 15% in 2010 to 11.1% in 2023, and in 2025 it is estimated to be 9.2%. Poverty is measured both as the number of people below a defined income threshold ($31,200 for a family of four in 2025), called absolute poverty or living below the poverty line, and as a quality-of-life issue for people living in a community, called relative poverty. (Source)

[Figure 1. Official Poverty Rate and Number of Persons in Poverty: 1959 to 2023. Poverty rates in percentages; number of persons in millions; shaded bars indicate recessions.]

Unfortunately, poverty rates vary by sex, gender, and race. The current ‘affordability’ crisis in the United States is an example of relative poverty with many complex factors contributing to it.

[Figure 4. Official Poverty Rates by Race and Hispanic Origin: 2023]

A general guideline for budgeting housing expenses (rent or mortgage) is 33% of household income, although rent and mortgage costs vary by zip code. The U.S. Census Bureau reported a per capita income of $43,289 (in 2023 dollars) for 2019-2023, while the Federal Reserve Bank reported personal income per capita of $73,207 for 2024. Personal income is the total earnings an individual receives from wages, salaries, investments, and government benefits before income taxes are deducted. For your discussion, consider the following breakdown based on $73,207 for one person; a family of four with two working adults would have an income of $146,414.

Federal Taxes (22%) $16,104

NJ State Taxes (5%) $3,660

Housing (33%) $24,156 ($2,000 a month)

Food (10%) $7,300 ($140 per week)

Auto Transportation (15%) $10,980

Discretionary Spending (15%) $10,980

Consider the discretionary expenses in your family for phones, cable and internet, car lease or loan payments, vacation, gifts, savings, clothing, credit card debt, education, etc.
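For classroom use, the short Python sketch below (not part of the original lesson materials) reproduces the breakdown above so students can rerun the exercise with a different income. The income figure and percentage shares come from the list above; the dollar amounts in the list are rounded, so the computed figures will differ slightly.

```python
# Budget breakdown for one earner at the Federal Reserve's 2024
# per capita personal income figure cited above. The percentage
# shares follow the guideline list in the text.
income = 73_207  # personal income per capita, 2024

shares = {
    "Federal taxes": 0.22,
    "NJ state taxes": 0.05,
    "Housing": 0.33,
    "Food": 0.10,
    "Auto transportation": 0.15,
    "Discretionary spending": 0.15,
}

# Print each category's share and annual dollar amount.
for category, share in shares.items():
    print(f"{category:24s} {share:4.0%}  ${income * share:>9,.0f}")

# The monthly and weekly figures quoted above are rounded from these:
print(f"Housing per month: ${income * 0.33 / 12:,.0f}")  # about $2,013
print(f"Food per week:     ${income * 0.10 / 52:,.0f}")  # about $141
```

Note that the six shares sum to 100% of pre-tax income, which is a useful starting point for discussing what happens when any one category (such as housing) runs above its guideline share.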

As incomes rise, people spend more money on food, but it represents a smaller share of their income. In 2023, households with the lowest incomes spent an average of $5,278 on food, representing 32.6% of after-tax income; middle-income households spent an average of $8,989, representing 13.5% of after-tax income; and the highest-income households spent an average of $16,996 on food, representing 8.1% of after-tax income.

The starting salary for many individuals with a four-year college education is about $70,000. Living in New Jersey is more expensive than living in many other states but for the purpose of discussion, we will use New Jersey as our reference.

The overall poverty rate in Italy is 9.8%, similar to the rate in the United States; the share of individuals below the poverty line (absolute poverty) is about 5%. Approximately half of the people in poverty live in southern Italy. Two contributing factors are the continuing effects of the government shutdown during the Covid-19 pandemic in 2020 and an aging population. These factors are related to Italy’s unemployment rate of 6.8% (2024), which is higher than the 4.4% rate in the United States, and to weak GDP growth of less than 1%. With a per capita income of $39,000 USD, about one-half of the per capita income in the United States, Italy also has an affordability crisis.

In 2017, Italy approved an “Inclusion Income” program, which has been reformed twice since its adoption. Under the current (2024) “Inclusion Allowance,” about 50% of the population receives supplemental income. The program supports upward economic mobility through education and health care, and Italy has partnered with the World Bank to support it. Another reported benefit is that poverty is no longer increasing and is expected to be significantly reduced over time.

Questions:

  1. Is the solution for affordability a higher minimum wage, lower taxes, price controls on food and housing, a guaranteed minimum income, or something else?
  2. Is it possible to lower the poverty rate through education and effective budgeting skills?
  3. Where do most Americans overspend their money and how can this best be corrected?
  4. Are transfer payments by the government (child care, Medicaid, Social Security, Unemployment Insurance, SNAP) wasteful or helpful?
  5. As a policy maker in the federal or state government, what is the first action you would take to address the affordability problem in the United States or in your state?
  6. How is Italy addressing the causes of poverty in addition to providing a guaranteed income to support people and families with basic needs?
  7. How is Italy financing its program and is it cost effective?
  8. Are tax cuts or tax credits an effective policy to assist people facing affordability issues?

7 Key Trends in Poverty in the United States (Peter G. Peterson Foundation)

United States Country Profile (World Bank)

Poverty in the United States: 2024 (U.S. Census)

2025: Kids Count Data Book (Annie E. Casey Foundation)

Italy’s Poverty Reduction Reforms (World Bank)

Evolving Poverty in Italy: Individual Changes and Social Support Networks (Molecular Diversity Preservation International, MDPI)

ISTAT Report: Poverty and Inequalities in Italy (EGALITE)

Italy’s Fight Against Global Poverty (The Borgen Project)

Activity #4: Gerrymandering and Voter Participation in the United States and Greece

Voter participation depends on many factors, and the structure for electing representatives to Congress is complex; it is also related to the selection of electors in each state who vote for president and vice president every four years. In the first 25 years of the 21st century, voting in the United States has changed significantly, both in the way citizens vote and in the definition of a legally registered voter. In this activity, you will discuss and analyze the issues of gerrymandering, voter participation, and voter eligibility in the United States and compare our process with voter participation in Greece.

Every 10 years, states redraw the boundaries of congressional districts to reflect population changes reported in the census. The purpose is to create district maps that elect legislative bodies that fairly represent communities. In 1929, the number of representatives was set at 435. In the 1920s the debate about fairness was between urban and rural populations; today it is between racial and ethnic populations and between political parties. Drawing maps to favor one political party is called ‘partisan gerrymandering.’ In 2019, the Supreme Court ruled in Rucho v. Common Cause that partisan gerrymandered maps cannot be challenged in federal court.

Partisan gerrymandering is undemocratic when one party controls the process at the state level. ‘Cracking’ is a strategy that places some voters in districts distant from their immediate geographic area, making it very difficult for them to elect a candidate from their preferred political party or racial or ethnic group. A majority of voters in New Jersey favor the Democratic Party, making it difficult to establish districts that are fair to residents who favor the Republican Party. The issue of fairness may conflict with what is considered legal and constitutional. This complexity should engage students in a lively debate regarding its relationship to voter participation.

After the 2020 census, Republicans controlled the redistricting process in more states than Democrats.

In Illinois, the Democratic majority designed a congressional map limiting Republicans to just 3 of 17 seats. Algorithms and artificial intelligence are now assisting the drawing of partisan districts. South Carolina offers an example of racial bias in a reconfigured district in Charleston that removed many Black voters; when challenged under the Voting Rights Act of 1965, the new design was defended as based on politics rather than race or ethnicity.

Section 2 of the Voting Rights Act has been challenged in the federal courts and was amended in 1982. The decision in Village of Arlington Heights v. Metropolitan Housing Development Corporation (1977) sets the current standard, requiring proof that discrimination would actually harm minority voting strength; this is more difficult to prove than an expectation that a law might be discriminatory. In 2013, the Supreme Court’s decision in Shelby County v. Holder struck down the Voting Rights Act’s preclearance coverage formula, clearing the way for states such as North Carolina to enact photo identification requirements.

Voting is largely controlled by the states, although they must comply with federal laws regarding elections for Congress and the president. Every state except North Dakota requires citizens to register to vote. Voter registration can help prevent ineligible voters from voting. The registration process generally includes identification to validate age, residency, and citizenship, along with a valid signature or state ID. Registration also helps prevent people from voting multiple times and prevents someone from stealing a ballot and submitting it.

There are different ways to measure voter participation: trends over time, turnout in years when voters elected a governor or president, turnout by age, race, or ethnicity, turnout when a popular issue was on the ballot, and so on. In New Jersey, voter participation in non-presidential elections is generally less than 50% of the eligible population.

In presidential elections, national voter turnout averages between 60% and 70%, and in New Jersey more than 70% of eligible voters participate. Efforts to increase voter participation include early voting, mail-in ballots, and extended hours at the polls.

Greece

Voter Participation:

Voter participation rates across the European Union average less than 50%. The democracies in most EU countries have multiple political parties, unlike the United States, which has two major parties. One reason for the lower voter turnout is pessimism regarding both the candidates and the issues. The voter participation rate in Greece is above the EU average, and we will use Greece as our case study.

In 2025, Greece’s political scene is dominated by the center-right party, New Democracy. The largest opposition party is SYRIZA, a left-wing progressive party. Among the current problems facing the people of Greece are high prices, health care, and public safety. The Russia-Ukraine War and the authoritarian government in Turkey are also concerns.

A recent Kapa Research survey revealed a significant and concerning trend, with recent elections showing record-high abstention rates: 46.3% in the June 2023 national elections and 58.8% in the June 2024 European elections. A recent scandal also impacted the elections, involving the spyware tool Predator, which has been linked to associates of the current prime minister, Kyriakos Mitsotakis. [Illustration: a guide to the numerous ideologies of the political parties in Greece.] There are also restrictions on the freedom of the press, which fosters a credibility gap between the people and their government.

Questions:

  1. Why do you think the U.S. Supreme Court ruled that racial gerrymandering is illegal but partisan gerrymandering is permitted?
  2. In Rucho, the U.S. Supreme Court acknowledged that partisan gerrymandering may be “incompatible with democratic principles.” Do you agree or disagree? Explain your answer.
  3. Even though gerrymandering may benefit one political party over another, it is the people who elect the state representatives who draw the maps for the congressional districts. Is this practice fair or unfair?
  4. What is the best way to significantly increase voter participation in the United States, Greece, and other countries?
  5. Are the requirements for voter registration and proof of identification significant restrictions on voters?
  6. To what extent is voting in New Jersey fair for all eligible voters?

Election Guide: United States (International Foundation for Electoral Systems)

Election Guide: Greece (International Foundation for Electoral Systems)

United States (Freedom House)

The Permanent Apportionment Act of 1929 (U.S. House of Representatives)

Freedom to Vote Act (Brennan Center for Justice)

Greece (Freedom House)

Why Greeks are Staying Away from the Polls: Key Insights into the 2023-2024 Survey (Kapa Research)

Book Review: Britain Begins, by Barry Cunliffe

The author tells the story here of both England and Ireland because they cannot be separated easily.  Since the very beginning of humans’ time in that part of the world, both lands and cultures were connected.  It is that united history that leads the way in this incredible story of the sometimes icy, sometimes verdant northern reaches of civilization.

The reader will find here exciting and revealing chapters in the history of movements throughout the pre-historic, Celtic, Roman, Anglo-Saxon, Norman, and modern times of the isles.  There are clear and helpful illustrations, and there is enough information here to fill any semester-long course on the history of England, or rather Albion, as it was first called by those who were using formal language.

The author paints rich stories onto a canvas of what was once a chilly ice-covered region and which came to be a world power.  The author makes use of language, tools, science, history, and other major fields to tell about the different eras of the isles.

            The years of the Celts are very intriguing ones, indeed.  Cunliffe speaks of the idea that there were two entirely distinct waves of movement among them—including Iberia, Britain, Ireland, Scotland, Brittany, and Wales (pp. 248-249).  He also speaks to the idea that the Celts started in the north and later in one era migrated as a large group southward to Brittany (p. 428).  He has a number of additional theories related to this and other good examples of “movement.”

            Another very interesting idea is that language, culture, and tools were shared up and down the west coast of Europe and up between the isles—a sort of “Atlantic” civilization (p. 344) emerging over time among the Celts.  This explains linguistic and other hints pointing to migrations and movements up and down the coast—as opposed to some earlier notions of “Spanish” Celts trudging only northward to the further reaches of what came to be the UK.

            Cunliffe talks about the notion of Celts moving southward—starting in Scotland and Ireland and coming down into Europe along the Atlantic. The author uses many different sorts of proof to advance this theory, while at the same time asking additional questions.

Teachers will be able to use this big book in a variety of ways.  First and foremost, it is important personal reading for any teacher interested in social studies in general and in the history of English-speaking people specifically.  Understanding the history of northwest Europe is helpful in understanding the intricate connections among the Celts and Europeans, the British and the Irish, and the Scandinavian and Germanic stock among the English.

Another important use is for helping students understand the power of “movement” among peoples, the conflicts created and agreements forged, and the resulting cultural and linguistic differences and similarities when peoples come into contact. The notion of movement also relates to the later travels of ideas, tools, traditions, names, weapons, foods, trades, and books. Any standards and benchmarks related to movement can be addressed through teacher use of this book as a reference and resource.

Yet another good use of this volume is as a textbook for a college-level course in history. Because it covers so much information, it could also be used as a summer reading project for advanced rising college freshmen who need timely non-fiction reading.

Those four uses of the book can be joined by another one I propose here: coffee table teaser.  It would be interesting to set this in plain view and see who would pick it up and want to start reading it.  It has a beautiful green cover.  There are in fact many photos, drawings, and illustrations inside.  The cover just might draw in some unsuspecting readers.

The Devastating Effects of the Great Leap Forward

Soon after the end of the Second World War, a new issue took center stage that would divide much of the world for the next several decades: the rise and spread of communism. Beginning with the Russian Revolution in 1917, communist and socialist ideologies spread, turning many nations into communist states either under, or at least inspired by, the Soviet Union. Other countries experienced their own revolutions that remade their governments; one example was China, which became a communist nation in 1949. The man who led the people of China into this new era of Chinese history and became their leader was Mao Zedong. With the Cold War in full effect, nations raced not only to spread or resist communism but also to advance their status in the world. Mao Zedong believed China had the full potential to grow stronger and faster in its economy, resources, and military. In 1958, he launched the Great Leap Forward, a movement focused on improving China’s stature as quickly as possible in order to catch up with global powers such as the Soviet Union and the United States. However, Mao’s ambitious methods and dedication to rapid increases in production backfired badly. It is not disputed that the Great Leap Forward failed under Mao Zedong’s leadership, but how bad were the repercussions? This paper discusses the extent of the failures and the cost in human lives caused by the Great Leap Forward.

            The early stages of the Cold War consisted of the biggest, most powerful nations of the time displaying their strength, alliances, power, and influence over the world. On one side of the conflict was the United States, which had significant military strength and government leadership and made it its goal to get involved when necessary to prevent other countries from falling to communism. On the other side was the Soviet Union, which held control over nearly half of Europe (particularly the nations formerly occupied by the Axis powers during World War II) and was spreading its influence through several parts of Asia, including China. The leader of the newly founded People’s Republic of China, Mao Zedong, took notice of how rapidly the Soviet Union had caught up with the rest of the world, and of how that growth was a major reason the U.S.S.R. was seen as a powerful threat.

In the article Demographic Consequences of the Great Leap Forward in China’s Provinces, Xizhe Peng explains that Mao’s ambition to replicate what had been done under Stalin’s five-year plans inspired his decision to speed up production throughout the country’s systems in order to reach the level of, and even outperform, other countries.1 “[T]he late Chairman Mao Zedong proposed the goal for China of overtaking Great Britain in industrial production within 15 years…The general line of the Party that guided the Great Leap Forward was ‘Going all out, aiming high and achieving greater, faster, better, and more economical results in building socialism’” (Peng).1 Beginning in 1958, China aimed to reach production levels that Mao Zedong saw as great improvements in building strength in resources, such as industrializing faster to catch up in steel production and provide more tools, resources, and military equipment. Nearly all citizens were put to work to contribute toward these collective goals, and while in theory this may have seemed like a good idea, problems quickly emerged that turned bad situations into catastrophic failures.

            Poor decisions, flawed thinking, and poor actions by Chairman Mao Zedong heavily damaged his own society and were, fairly directly, the cause of the deaths of millions of people. In the article Dealing with Responsibility for the Great Leap Famine in the People’s Republic of China, Felix Wemheuer discusses who or what the Chinese Communist Party blamed for the disastrous famine and deaths that the Great Leap Forward caused throughout China; many felt that Mao Zedong himself was solely responsible.2 For a short while, Mao was so stubborn that he refused to accept responsibility for what he had caused, preferring to blame other factors. However, due to pressure from his party and the massive devastation across China resulting from the failed drive for mass production, Mao Zedong eventually took some of the blame.

            The rapid growth that the Soviet Union accomplished in a short amount of time was a remarkable feat. The Soviet Union succeeded in becoming an industrial powerhouse by the mid-20th century, an impressive demonstration of how a country can shift its goals and, within a short period, grow in the eyes of the world in strength and power. In a period of world history when many countries were racing to grow their industry, military, and level of dominance, Mao Zedong looked to use, explore, and expand upon similar strategies so that China could join the arms race and be seen as a powerful contender. Mao was clearly trying to follow in Soviet footsteps by rapidly increasing China’s resources and financial stock, but just as the Russians suffered major setbacks, the people of China would face similar, yet even greater, damage to their economy. In the article Causes, Consequences and Impact of the Great Leap Forward in China, Hsiung-Shen Jung and Jui-Lung Chen describe the detrimental damage the Great Leap Forward caused to China’s economy.3 “After the Great Leap Forward, it took five years to adjust the national economy before it was restored to the 1957 level… economic losses of up to RMB 120 billion” (Jung and Chen).3 The nation was put under tremendous debt by the poor planning and even worse results of the Great Leap Forward, and Mao’s stubbornness prevented him from taking responsibility. Mao even made claims designed to redirect the Chinese people’s frustrations elsewhere. As Jung and Chen state, “Mao remained reluctant to fully acknowledge the mistakes of the Great Leap Forward… he proposed the Party’s fundamental approach in the socialist stage, followed by a left-wing socialist educational campaign aimed at cracking down on the capitalist roaders” (Jung and Chen).3 Just as Mao had spread his ideology and political messages to the people of China, he responded to the hardship of the failed experiment he had caused by shifting blame onto those with economic and business philosophies opposed to the Chinese Communist Party’s. The main causes of the dire shape of China’s economy, with major losses in food production, labor, and human life, were Mao’s egotistical push for China to change and grow too hard and too fast, rather than allowing time for proper development and a fair distribution of wealth, food, and supplies to his own citizens.

            The famine caused by the Great Leap Forward is one of the most infamous famines in history, alongside the notorious Irish potato famine of the 19th century that killed over a million people. The total death toll of the famine in China during the Great Leap Forward was in the tens of millions. As Shige Song writes in the article Mortality consequences of the 1959-1961 Great Leap Forward famine in China: Debilitation, selection, and mortality crossovers, “Famine is a catastrophic event” (Song).4

The same article presents a research study in which the author compiled data on mortality rates and statistics during the Chinese famine, as well as on its negative repercussions for people and birth rates afterwards, including a graph showing the probability of survival decreasing.4 The declining rate of survival affected not only very young children and teens but also people years after the famine was over. The distribution of food supplies and the decreasing number of crops successfully grown made a major dent in the health and lifespan of the average citizen of China, and the famine itself set in within a short period of time. The Great Leap Forward lasted only a few years, but the severe damage it caused meant the people of China continued to suffer for years to come.

            When thinking about how to measure the severity of an event or period, one may look at the total number of deaths directly linked to it. While this is a reasonable statistic, in the case of a famine where the main cause of death is starvation, it raises the question of how large the drop in food output really was. The article The Great Leap Forward: Anatomy of a Central Planning Disaster by Wei Li and Dennis Tao Yang provides precise data and statistics on grain output, the number of workers, and other elements of farm production.5

The Great Leap Forward lasted from 1958 to 1962, and Li and Yang’s table of grain output in China shows that total grain output during those years decreased by almost 100 million tons, a loss of almost half of the total output just before the Great Leap Forward.5 Over the same period there was a noticeable decrease in the number of workers, presumably reflecting deaths from the famine and the harsh labor they endured. However, there were also increases in both farm machinery and chemical fertilizer, which grew even more rapidly in the years after the Great Leap Forward. While this can be considered a small victory for Mao’s goal of rapidly modernizing China’s agriculture, it came at the major cost of a famine, a decrease in crops grown, and the loss of many Chinese farmers. The advanced farming tools, machinery, and techniques that did emerge from the Great Leap Forward still came at a major cost for the people and economy of China.

            While farming and grain production were a very big part of the overall progression of China’s resources, they were not the only things Mao Zedong was trying to rapidly change in order to make China a more powerful country. For most of its history, China was primarily an agricultural society, but at the turn of the 20th century many countries were industrializing their materials, resources, and militaries at a very fast rate. Steel production in China was to be taken much more seriously so that China could catch up with the other world powers in industrial strength, but just as with the consequences of rapidly changing grain production, Mao’s attempt to reform steel production carried its own tolls. Returning to Li and Yang’s article The Great Leap Forward: Anatomy of a Central Planning Disaster, a statistical table on steel production and output in China during this period shows how big a jump there was in steel and iron output within a very short time.5 China was able to triple its steel and iron output during the years of the Great Leap Forward, and the number of production units increased from tens of households to over two thousand households in just a few years.5 However, during this same period, the number of provinces that allowed their people exit rights quickly went down, as more and more provinces took rights away from their own workers. Also, in the years after the Great Leap Forward, steel output and the number of production units decreased by a noticeable amount, showing that the gains were only a short-term benefit with major consequences.5 This shows how quick, sweeping changes in the production of any resource are damaging to the other elements of a country, such as human rights and households’ access to food, materials, and resources.

            The rapid increase in the demand for more food and faster crop growth was not good for the people in the long run, since it caused a famine that left millions upon millions to starve to death. Starvation was already a major issue for the population of one of the most populous countries in the world, but the Chinese people were not the only casualties of the Great Leap Forward’s farming strategies: the ground itself was severely damaged by the rapid changes and increased activity in China. The article Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61) by Elizabeth Gooch explains how Mao’s farming campaign during the Great Leap Forward not only increased the mortality rate but also damaged the dirt and soil of China.6 Statistics and graphs assembled by Gooch show an increased amount of rugged terrain associated with the vast increase in production, manufacturing, and pollution caused by the Great Leap Forward.6 Much of the natural dirt, soil, and nutrients in the farming grounds used for growing crops, plants, and food was blighted by the overproduction going on throughout China, and there are even parallels between the death rate and the rate at which soil became rugged. Mao Zedong wanted grain production, along with the production of other resources, to keep increasing, but because his plans were executed poorly and produced horrendous results, he caused enormous harm to the people of China and to China’s natural environment.

The number of crops harvested was down, the natural land of China was dwindling, and a famine had taken the lives of millions, but perhaps it was all worth it in the long run for the growth and prosperity of China. The main purpose of Mao Zedong’s Great Leap Forward was for China to catch up with the fully developed and powerful countries, and one of the biggest factors in doing so is an efficient, well-running, and strong industrial production system. Ever since the Industrial Revolution, civilizations one by one have shifted their main economic production toward factories producing metal, steel, and other materials. This was also one of the biggest outcomes of the Soviet Union’s rapid growth in the early 20th century, and it was the strong industrial powerhouse Joseph Stalin achieved for his country that Mao Zedong wanted to replicate in China. Returning to Gooch’s Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61), the growth of industrialization within China was perhaps the biggest accomplishment of the Great Leap Forward.6 As the line graphs in Gooch’s article show, industry increased by a very large amount during the years of the Great Leap Forward, while agriculture took a slight decrease over the same time frame, most likely because many farmers were forced to work in the newly built factories and steel-producing areas.6 However, looking at the rates of birth, growth, and death during these same years, it becomes clear that the success of rapid Chinese industrialization came at the expense of the people themselves. Birth and growth rates decreased sharply during this time, and the death rate tremendously increased.6 While China did benefit from the growth of industry and metal production, it came at the cost of the health and safety of the people, along with attention being shifted away from agriculture and the pollution of the land.

Besides the main elements of the Great Leap Forward that caused major problems for the people of China, such as grain, steel, food, and other resources, there was another element crucial to the survival of people and civilizations: water. The Great Leap Forward also included campaigns for the industrial use and processing of water, which caused still more problems for China. In the article The Great Leap Forward (1958-1961): Historical events and causes of one of the biggest tragedies in People’s Republic of China’s history, Adriana Palese describes the effects of the increase in water conservation projects from 25 million to 100 million, the “inhuman working hours” they required, and the fact that the projects themselves were not a success, coming at the expense of the people of China: “most were useless and caused disasters some years after and other projects were simply abandoned and left uncompleted” (Palese).7 While there is mention of a decrease in flooding, this is once again an example of how the many campaigns Mao Zedong launched to advance China through rapid industrialization did not work for the benefit of the Chinese people as a whole, since the vast majority suffered from this campaign along with the other failed campaigns of the Great Leap Forward.

While rapidly increasing the production of everything in China may seem good in concept, these bold campaigns not only harmed the people and society of China but sometimes made situations worse than they had been before. In the same work, Palese writes that “there were total shortages of other foods and other products such as cooking oil, sugar, thermos bottles, porcelain dishes, glasses, shoes, etc” (Palese).7 Not only could less food be produced because of the dwindling number of crops and the ongoing famine, but manufactured goods and simple tools and supplies were facing a major shortage, suggesting that China’s basic market economy for goods and products was collapsing. Palese’s article even includes the wide percentage decreases in the output of agricultural and industrial goods during this period.7 The Great Leap Forward was rapidly deteriorating every element of Chinese society: the economy, public morale, and way of life.

During one of the most crucial parts of the Great Leap Forward, Mao Zedong aimed to increase the farming of grain, since it was still essential for feeding the population. However, a common enemy of crop growth in any farming society is pests, since insects and other animals eat away at growing crops, and Mao Zedong had his own solution to this problem. In the article China’s deadly science lesson: How an ill-conceived campaign against sparrows contributed to one of the worst famines in history, Jemimah Steinfeld writes, “As part of the Four Pests campaign – a hygiene campaign against flies, mosquitoes, rats and sparrows – people were called upon to shoot sparrows, destroy their nests and bang pots and pans until the birds died of exhaustion” (Steinfeld).8 Anyone in China, men, women, and children, could participate in killing or removing these targeted pests. While there were minor victories in removing them, the campaign came at a serious cost. One of these so-called pests, the sparrow, was removed from China’s agricultural ecosystem, but sparrows had been responsible for keeping away an even bigger threat to crops: locusts.8 Even after Mao Zedong stopped the killing of sparrows, the damage had already been done; this was one of the biggest reasons the famine spread so rapidly through China, causing the deaths of millions of people in just a few short years.8 This shows why, no matter the circumstances or beliefs, the ecosystem of any land should never be drastically altered for human convenience: removing living creatures from their natural habitat and cycle created a direct link between the pest campaign and the millions of deaths caused by famine.

In conclusion, while the Great Leap Forward was initially seen as a progressive strategy to quickly advance Chinese society, it ultimately resulted in failure. Millions of people died of starvation in mass famines across the vast farmland of China. Many farmers were taken from their fields and forced to work in industrial yards so that China could catch up on steel and metal resources. Mao Zedong was so blinded by other nations’ rapid industrialization that he ignored the negative consequences it could bring, and this time China suffered more than perhaps any country before, with little to nothing to show for it. Mao Zedong’s attempt to advance China only set the country back, reduced morale, and reduced support from his own party. The Great Leap Forward will go down in history as one of the most devastating eras in Chinese history, because of the enormous loss of life and because one of the oldest and most culture-rich societies in the world nearly destroyed itself over ambitious goals driven by the global affairs of the Cold War.

Endnotes

  1. Peng, Xizhe. “Demographic Consequences of the Great Leap Forward in China’s Provinces.” The China Quarterly 159 (1999): 430-453.
  2. Wemheuer, Felix. “Dealing with Responsibility for the Great Leap Famine in the People’s Republic of China.” The China Quarterly 216 (2013): 402-423.
  3. Jung, Hsiung-Shen, and Jui-Lung Chen. “Causes, Consequences and Impact of the Great Leap Forward in China.” Asian Culture and History 11, no. 2 (2019): 61–70.
  4. Song, Shige. “Mortality Consequences of the 1959–1961 Great Leap Forward Famine in China: Debilitation, Selection, and Mortality Crossovers.” Social Science & Medicine 71, no. 3 (2010): 551–558.
  5. Li, Wei, and Dennis Tao Yang. “The Great Leap Forward: Anatomy of a Central Planning Disaster.” Journal of Political Economy 113, no. 4 (2005): 840–77.
  6. Gooch, Elizabeth. “Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61).” Journal of Comparative Economics 47, no. 4 (2019): 699–718.
  7. Palese, Adriana. The Great Leap Forward (1958–1961): Historical Events and Causes of One of the Biggest Tragedies in People’s Republic of China’s History. Bachelor’s thesis, Lund University, 2009.
  8. Steinfeld, Jemimah. “China’s Deadly Science Lesson: How an Ill-Conceived Campaign Against Sparrows Contributed to One of the Worst Famines in History.” Index on Censorship 47, no. 3 (September 2018): 6–8.


How Perot’s Economic Populism Nearly Broke the 2-Party System

The 1990s were a very impactful time in America, both in pop culture, with the World Wide Web coming into play, TV shows like Friends and Seinfeld, and the rise of grunge music, and in politics. We had the L.A. riots and the trial of O.J. Simpson, but arguably most important were Ross Perot and his political campaigns of the 1990s, and how he almost broke the two-party political system that had been in place for over 130 years at the time.

Ross Perot was a complete political outsider, primarily active in 1990s American politics, who ran for president twice, in 1992 and 1996, without ever having held office before. “However, the election was to be complicated by a third-party bid from Ross Perot. Despite winning 19 million votes in the 1992 election, the maverick Texan aroused little public enthusiasm this time, but opinion polls nevertheless suggested that he could get more than 10 per cent of the national vote.[1]” Perot ran as an independent candidate in 1992 and under his newly created Reform Party in 1996, receiving roughly 19% of the vote in 1992 and 8.5% in 1996. His 1992 showing was the best by an independent or minor-party candidate in nearly 80 years. “Against most predictions, 19 percent of the vote went to Ross Perot, the best result for a candidate since Teddy Roosevelt.[2]” The 1992 election produced the highest third-party percentage since 1912, when Theodore Roosevelt received nearly 27% of the popular vote and won 6 states and 88 electoral votes.

What, then, was the reason for Ross Perot’s candidacy in the first place? Perot advocated a “contract” with Americans that laid out his main political stances. “The Contract emphasized the Perot balanced issues of a Balanced federal budget, reform, and limiting American commitment to internationalism.”[3] Although both of his presidential bids ultimately failed, they came closer than any campaign in generations to breaking a two-party system that had elected only Democrats or Republicans since 1848, when the Whig Party, a proto-Republican party that competed with the Democrats before its disbandment, won the presidency with Zachary Taylor. How was Perot able to come so close to breaking a political arrangement nearly 150 years old? The answer was his outsider brand of economic populism, expressed through his staunch opposition to NAFTA, his largely self-funded campaign, and his businessman persona.

These factors mattered so much because Perot’s campaigns nearly turned the United States, a two-party system for most of the preceding 130 years, into a three-party or even multiparty system, a sharp departure from the modern two-party arrangement that many Americans regard as flawed and uncompromising. Had that happened, American government and economic life might look significantly different today.

Perot’s Background and Policies

Ross Perot was born in Texarkana, Texas, the son of a cotton broker. He attended the U.S. Naval Academy, graduating and receiving his commission in 1953. Perot’s military experience undoubtedly helped him relate to ordinary Americans at a time when, given the prevalence of the draft, most men had similar experiences.

Perot founded his first company, Electronic Data Systems, in 1962. The company focused primarily on data processing, and its stock increased tenfold after the U.S. government began contracting with it to process Medicare claims. In 1984, Perot sold the company for $2.4 billion, roughly the equivalent of $6.1 billion in 2025 dollars. In 1992, Perot declined to endorse either President George H. W. Bush or Bill Clinton, in part because of their similar stances on the Gulf War.
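For teachers who want students to see where such a conversion comes from, the standard method multiplies the nominal amount by the ratio of the price level in the target year to the price level in the original year. The formula below is a generic sketch, not a calculation from this article’s sources; the final ratio is simply back-solved from the article’s own figures:

\[ V_{2025} = V_{1984} \times \frac{P_{2025}}{P_{1984}}, \qquad \frac{\$6.1\ \text{billion}}{\$2.4\ \text{billion}} \approx 2.5 \]

In other words, the article’s figures treat prices in 2025 as roughly two and a half times their 1984 level.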

Perot ultimately chose to run for president in 1992 amid the significant unpopularity of the two nominees, Bill Clinton and George H. W. Bush. He ran on a populist platform framed around the interests of the people rather than those of the government, and he hammered at the flaws of both opponents: he highlighted allegations of sexual harassment against Clinton from his time as governor of Arkansas, and he attacked Bush for what he considered reckless spending during the Gulf War, turning Bush’s “no new taxes” pledge into a charge of hypocrisy once Bush approved tax hikes. Perot drew this style of politics from the economic judgment that had made him a billionaire, focusing on NAFTA, government spending, and budgeting, while also taking socially popular positions of the day, such as allowing gays in the military, supporting the death penalty, and supporting the war on drugs. He emphasized these stances throughout his campaign to increase his share of the vote.

The public viewed Perot very differently than did established businesses and politicians already aligned with one party. Politicians and companies backing the Democrats or the Republicans largely assumed Perot would act as a spoiler against the other party in both 1992 and 1996. That assumption proved inaccurate: Perot drew roughly equal numbers of supporters away from each major party. Much of the general public, meanwhile, believed Perot could win outright. In July 1992, ABC News reported a poll suggesting that Perot would win a plurality in every state except Massachusetts and Washington, D.C., which would go to Bill Clinton, and Oklahoma, which would go to George H. W. Bush. Most people who threw their support behind Perot did so not to spoil the election but because they believed he could actually become president and change the country.

The first pillar of Perot’s near success was his distinctive, populist opposition to NAFTA, the North American Free Trade Agreement. NAFTA originated with George H. W. Bush, whose administration negotiated it during his final year in the White House in 1992; Republicans saw it as a benefit to the American economy that would increase free trade among Canada, the United States, and Mexico. “Bush left other foreign policies in an incomplete state. In 1992, his administration succeeded in negotiating a North American Free Trade Agreement (NAFTA), which proposed to eliminate tariffs between Canada, the United States, and Mexico.”[4]

It was Clinton, however, who ultimately secured NAFTA’s ratification, over the reluctance of most Democrats in his own party. “Even when the administration focused on economics, it still floundered. House Democrats, in particular, believed Clinton made serious missteps in moving away from the party’s traditions. One of his first major moves was to oversee the ratification of the North American Free Trade Act, the agreement with Mexico and Canada that President Bush signed as a lame duck in December 1992. Many top Democrats, including House Majority Leader Dick Gephardt, vehemently opposed the trade agreement as a threat to American workers and the unionized workforce. But Clinton, who embraced many of the tenets of free-market economics, insisted on sticking with the agreement.”[5] Support for NAFTA and its promised long-term benefits of free trade crossed party lines in the Congresses of the early to mid-1990s, though it drew a majority of Republicans and only a minority of Democrats, an alignment that made it look as though Clinton was giving in to the opposing party. “He cobbled together a bipartisan coalition to pass the legislation that would implement the terms of the treaty in August 1993. With his own party’s congressional leaders standing against NAFTA, Clinton had to rely on his erstwhile enemies. Indeed, more Republicans voted to ratify the bill than Democrats: the House passed NAFTA by a vote of 234–200, with 132 Republicans and 102 Democrats in favor; the Senate approved it by a vote of 61–38, with 34 Republicans and 27 Democrats in favor. Though NAFTA represented a rare bipartisan victory for the president, it ultimately cost him the support of several important allies in Congress and other constituencies, while it gained him no new ones.”[6]

NAFTA proved unpopular with many Americans, which was reflected in a significant drop in President Clinton’s approval rating, from 64% to nearly half that, 37%. The general consensus on NAFTA was staunch opposition, on the belief that the treaty would only take American jobs and depress American wages. “Clinton and a great many economists maintained that breaking down trade barriers forced American exporters to become more efficient, thereby advancing their competitiveness and market share. But some corporations did move operations to Mexico, and pollution did plague some areas near the Mexican-American border. Labor leaders, complaining of the persistent stagnation of manufacturing wages in the United States, continued to charge that American corporations were not only outsourcing their jobs to Mexico (and other cheap labor nations) but were also managing to depress payrolls by threatening to move. When the American economy soured in 2001, foes of NAFTA stepped up their opposition to it.”[7]

Perot was also adamantly opposed to reckless government spending and preferred whatever worked best for the American populace, an issue so central to his 1992 campaign that Clinton felt obliged to respond. “In the second half of 1993, President Clinton hoped to restore his image as a moderate by pushing for some economic and political reforms. First, he worked in the summer of 1993 to address the federal debt built up in the Reagan and Bush eras. This had been an issue that third-party candidate Ross Perot made central in the 1992 campaign, and Clinton, burnishing his DLC credentials, wanted to demonstrate that Democrats could be the party of fiscal responsibility.”[8]

Perot, running as a minor-party candidate, capitalized on his proto-populist opposition to NAFTA, which, as noted above, most Americans viewed unfavorably. During and after the 1992 election he leaned on his famous warning that NAFTA would create a “giant sucking sound” as it pulled jobs out of America. “Ross Perot’s campaign against NAFTA criticized the supposed (but in fact nonexistent) ‘giant sucking sound’ that would happen as NAFTA took jobs away from Americans.”[9] And, as noted above, some American companies did in fact move operations to Mexico after NAFTA took effect.

The amount of money flowing from PACs in 1996 was also extremely high, with Republicans far outpacing Democrats. “Democratic candidates raised 98.78 million dollars and Democratic committees raised 14.83 million dollars in the 1996 cycle. Republicans doubled that and raised 118.3 million dollars for Republican candidates and 9.12 million dollars from Republican Party committees.”[10] Perot’s campaign finance strategy differed from how Democrats and Republicans had traditionally campaigned. He knew he would face a funding disadvantage, expecting both major parties to out-fund him by tens of millions of dollars, so he took a down-to-earth route to raising money.

Perot wanted to be seen as a pragmatic, populist, honest figure of the people. First, he drew heavily on his own billionaire wealth, even taking out loans to help fund his 1992 and 1996 campaigns. For his 1992 campaign, he ran as an independent. “Texas billionaire Ross Perot bankrolled the final leg of his presidential campaign in part with loans after spending more than $56 million of his own money with no expectation of being repaid, reports showed Friday. Perot listed more than $4.5 million of the $13.9 million he directed to his campaign between Oct. 15 and Nov. 23 as loans received from or guaranteed by himself, the latest report to the Federal Election Commission showed.”[11] Perot did not rely on his own money alone, however; he also accepted small donations from supporters, capped at five dollars each, to reinforce his down-to-earth appeal. “After stating several times during the talk show that he was not interested in becoming a politician, Mr. Perot, 61 years old, finally hedged his refusal. ‘If voters in all 50 states put me on the ballot — not 48 or 49 states, but all 50 — I will agree to run,’ he said. He also said he would not accept more than $5 from each supporter. A week after appearing on the talk show Mr. Perot’s secretary, Sally Bell, said that she had received calls from people in 46 states promising support, as well as many $5 contributions.”[12] The $5 limit was meant to show that he would run for president only if the American people wanted him to, not out of personal political ambition.

Perot also relied on his businessman persona to project economic strength, and he capitalized on it in two ways. First, he founded and ran his own political party, the Reform Party, under whose banner he ran in 1996 (he had run as an independent in 1992). The party advocated populism, centrism, and economic conservatism, positions with broad support among Americans at the time. He hoped the party would motivate more people to vote for him, since he could now present himself as a broader political organization rather than a lone individual; even so, his 1996 run under the new Reform Party won less than half of his 1992 popular-vote share. “Perot ran again in 1996 as the Reform Party candidate and won 8% of the popular vote. With his challenges to mainstream politics, he emerged as one of the most successful third-party candidates in US history, with the most support from across the political spectrum since Theodore Roosevelt.”[13]

He also made ending job outsourcing one of his basic political positions. “Appealing to resentment towards established politicians and advancing himself as a vital third candidate option, Perot campaigned on a platform that included balancing the federal budget, opposition to gun control, the end of job outsourcing, opposition to NAFTA, and popular input on government through electronic direct democracy town hall meetings. Perot challenged his supporters to petition for his name to appear on the ballot in all fifty states.”[14]

In his staunch opposition to NAFTA, Perot argued that American jobs would be put in jeopardy to a far greater degree than jobs in Canada and Mexico, and, as noted earlier, some companies did move their operations to Mexico, opening further holes in American employment. A significant number of politicians agreed with him: judging from the ratification votes, a majority of congressional Democrats and a substantial minority of congressional Republicans, in both the Senate and the House, opposed NAFTA, which was nevertheless endorsed and put in place by President Bill Clinton and a majority of Congress. NAFTA remained in force until 2020, when President Donald Trump replaced it with the USMCA, the United States-Mexico-Canada Agreement, which continued most of NAFTA’s policies.

Overall, Perot’s presidential campaigns rested on the three pillars through which his economic populism nearly broke the two-party system: his opposition to NAFTA joined to moderate, centrist, fiscally conservative views; his unique form of campaign funding; and his businessman persona and skills, together with his new party, which led Americans to treat his campaign as genuinely winnable.

“1996 Federal Campaign Spending up 33% from 1992; Total Candidate and Major Party Disbursements Top $2 Billion.” 1997. Public Citizen. January 30, 1997. https://www.citizen.org/news/1996-federal-campaign-spending-up-33-from-1992-total-candidate-and-major-party-disbursements-top-2-billion/.

Britannica Money. 2024. “Ross Perot.” April 1, 2024. https://www.britannica.com/money/Ross-Perot.

Gerstle, Gary. 2022. The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era. New York, NY: Oxford University Press.

Holmes, Steven A. 1992. “THE 1992 ELECTIONS: DISAPPOINTMENT — NEWS ANALYSIS: An Eccentric but No Joke; Perot’s Strong Showing Raises Questions on What Might Have Been, and Might Be.” The New York Times, November 5, 1992. www.nytimes.com/1992/11/05/us/1992-elections-disappointment-analysis-eccentric-but-no-joke-perot-s-strong.html.

Levin, Doron P. 1992. “THE 1992 CAMPAIGN: Another Candidate?; Billionaire in Texas Is Attracting Calls to Run, and $5 Donations.” Archive.org. March 7, 1992. https://web.archive.org/web/20190427005459/https://www.nytimes.com/1992/03/07/us/1992-campaign-another-candidate-billionaire-texas-attracting-calls-run-5.html.

Lichtenstein, Nelson, and Judith Stein. 2023. A Fabulous Failure. Princeton: Princeton University Press.

Los Angeles Times. 1992. “Perot Spent $56 Million of Own, $4.5 Million in Loans on Race.” December 5, 1992. https://www.latimes.com/archives/la-xpm-1992-12-05-mn-1144-story.html.

New York Times. 1992. “THE 1992 CAMPAIGN: The Media; Perot’s 30-Minute TV Ads Defy the Experts, Again.” October 27, 1992. https://www.nytimes.com/1992/10/27/nyregion/the-1992-campaign-the-media-perot-s-30-minute-tv-ads-defy-the-experts-again.html.

Norris, P. (1993). The 1992 US Elections [Review of The 1992 US Elections]. Government and Opposition, 28(1), 51–68.

“Political Action Committees (PACs).” 2024. OpenSecrets. https://www.opensecrets.org/political-action-committees-pacs/2024.

Patterson, James T. 2007. Restless Giant : The United States from Watergate to Bush v. Gore. New York/Oxford: Oxford University Press.

Savage, Robert L. 1993. “Changing Ways of Calling for Change: Media Coverage of the 1992 Campaign.” American Review of Politics 14 (July 1, 1993): 213. https://doi.org/10.15763/issn.2374-7781.1993.14.0.213-228.

Stiglitz, Joseph. 2015. The Roaring Nineties. Penguin UK. 

Stone, Walter J., and Ronald B. Rapoport. 2001. “It’s Perot Stupid! The Legacy of the 1992 Perot Movement in the Major-Party System, 1994–2000.” Political Science & Politics 34 (01): 49–58. https://doi.org/10.1017/s1049096501000087

 “Third-Party Reformers.” n.d. Digital Public Library of America. https://dp.la/exhibitions/outsiders-president-elections/third-party-reform/ross-perot.

Walker, Martin. 1996. Review of The US Presidential Election, 1996. International Affairs 72 (4): 657–74. https://www.jstor.org/stable/2624114


[1] Walker, Martin. 1996. Review of The US Presidential Election, 1996. International Affairs 72 (4): 669.

[2] Norris, Pippa. 1993. “The 1992 US Elections.” Government and Opposition 28 (1): 51.

[3] Stone, Walter J., and Ronald B. Rapoport. 2001. “It’s Perot Stupid! The Legacy of the 1992 Perot Movement in the Major-Party System, 1994–2000.” Political Science & Politics 34 (1): 52. https://doi.org/10.1017/s1049096501000087.

[4] Patterson, James T. 2005. Restless Giant: The United States from Watergate to Bush v. Gore. New York: Oxford University Press, 201–202.

[5] Patterson, Restless Giant, 208–209.

[6] Patterson, Restless Giant, 209.

[7] Patterson, Restless Giant, 334.

[8] Kruse, Kevin, and Julian Zelizer. 2019. Fault Lines, 209.

[9] Stiglitz, Joseph E. 2004. The Roaring Nineties: Seeds of Destruction. London: Penguin, 203.

[10] (“Political Action Committees (PACs)” 2024)

[11] Los Angeles Times. 1992. “Perot Spent $56 Million of Own, $4.5 Million in Loans on Race.” December 5, 1992. https://www.latimes.com/archives/la-xpm-1992-12-05-mn-1144-story.html.

[12] Los Angeles Times, December 5, 1992.

[13] “Third-Party Reformers.” n.d. Digital Public Library of America. https://dp.la/exhibitions/outsiders-president-elections/third-party-reform/ross-perot, 1.

[14] “Third-Party Reformers,” n.d., 2.

Teaching the Black Death: Using Medieval Medical Treatments to Develop Historical Thinking

Few historical events capture students’ attention as immediately as the Black Death. The scale of devastation, the drama of symptoms, and the rapid spread of disease all make it an inherently compelling topic. But beyond the shock value, medieval responses to the plague open the door to something far more important for social studies education: historical thinking. When students first encounter medieval cures like bloodletting, vinegar-soaked sponges, herbal compounds like theriac, or even the infamous “live chicken treatment”, their instinct is often to laugh or dismiss the past as ignorant. Yet these remedies, when studied carefully, reveal a medical system that was logical, coherent, and deeply rooted in the scientific frameworks of its time. Teaching plague medicine provides teachers with a powerful opportunity to challenge presentism, develop students’ contextual understanding, and foster empathy for people whose worldview differed radically from our own. Drawing on research into plague treatments during the Black Death, this article offers teachers accessible background knowledge, addresses common misconceptions, and provides practical strategies and primary-source approaches that use medieval medicine to strengthen disciplinary literacy and historical reasoning in the social studies classroom.

Understanding medieval plague medicine begins with understanding humoral theory, the dominant medical framework of the period. Medieval Europeans believed that the body’s health depended on maintaining balance among the four humors: blood, phlegm, yellow bile, and black bile (Leong, 2017). Illness occurred when these fluids fell out of proportion, making the plague less a foreign invader and more a catastrophic imbalance. Bloodletting was one of the most common responses, meant to “draw off the poisoned blood” and reduce fever. Other strategies included induced vomiting or purging, both intended to remove corrupted humors from the body. Treatises such as Bengt Knutsson’s The Dangers of Corrupt Air emphasized both prevention and treatment through the regulation of sensory experiences, most famously through the use of vinegar (Knutsson, 1994). Its sharp and purifying qualities made it useful for cleansing internal humors or blocking the inhalation of dangerous air. Though these methods seem foreign to modern readers, they reflect a rational system built upon centuries of inherited medical theory, offering students a clear example of how people in the past interpreted disease through the frameworks available to them.

Herbal and compound remedies were equally important in medieval plague treatment and worked in tandem with humoral correction. One of the most famous was theriac, a complex blend of dozens of ingredients including myrrh, cinnamon, opiates, and various roots (Fabbri, 2007). Practitioners believed that theriac fortified the heart and expelled harmful humors, with its complexity symbolizing the combined power of nature’s properties. Other remedies included ginger-infused ale, used to stimulate internal heat, or cupping, which involved applying heated horns or glasses to the skin in order to draw corrupted blood toward the surface. These treatments show the synthesis of classical medical texts, practical experimentation, and local knowledge. When teachers present these treatments in the classroom, students will begin to see medieval medicine not as random or superstitious, but as a sophisticated system shaped by observation, tradition, and reason.

Medieval healing also extended into the emotional and spiritual realms, reflecting the belief that physical and internal states were interconnected. Chroniclers described how fear and melancholy could hasten death, leading many to encourage celebrations, laughter, and community gatherings even during outbreaks. A monastic account from Austria advised people to “cheer each other up,” suggesting that joy strengthened the heart’s resilience. At the same time, religious writers like Dom Theophilus framed plague as both a physical and spiritual crisis, prescribing prayer, confession, and communion as essential components of healing. These practices did not replace medical treatment but complemented it, emphasizing the medieval tendency to view health holistically. Introducing students to these lifestyle-based treatments helps them recognize the complexity of medieval worldviews, where spirituality, emotion, and physical health were deeply intertwined.

Because plague remedies can appear unusual or ineffective to modern students, several misconceptions tend to arise in the classroom. Many students initially view medieval people as ignorant or irrational, evaluating the past through the lens of modern scientific understanding. When teachers contextualize treatments within humoral theory and medieval medical logic, students begin to appreciate the internal coherence of these ideas. Another misconception is that medieval treatments never worked. While these remedies could not cure the plague itself, many offered symptom relief, soothed discomfort, or prevented secondary infections, revealing that medieval medicine was neither wholly ineffective nor devoid of empirical reasoning (Archambeau, 2011). Students also often assume that religious explanations dominated all responses to disease. Examining both medical treatises and spiritual writings demonstrates that medieval responses were multifaceted, blending empirical, experiential, and religious approaches simultaneously. These insights naturally support classroom strategies that promote historical thinking.

Inquiry-based questioning works particularly well with plague treatments. Asking students, “Why would this treatment make sense within medieval beliefs about the body?” encourages them to reason from evidence rather than impose modern judgments. Primary-source stations using texts such as The Arrival of the Plague or The Treatise of John of Burgundy allow students to compare remedies, analyze explanations of disease, and evaluate the reliability and purpose of each author (Horrox, 1994). A creative but historically grounded activity involves inviting students to “design” a medieval plague remedy using humoral principles, requiring them to justify their choices based on qualities such as hot, cold, wet, and dry. Such exercises not only build understanding of the medieval worldview but also reinforce core social studies skills like sourcing, contextualization, and corroboration. Even broader reflections, such as comparing medieval interpretations of disease to modern debates about public health, can help students think critically about how societies make sense of crisis.
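For teachers comfortable with a little scripting, the remedy-design activity can even be scaffolded with a short program. The sketch below is a hypothetical classroom aid, not part of the research discussed here: the humor-quality pairings are the standard ones from humoral theory, while the function and variable names are invented for illustration.

# A minimal sketch of a worksheet helper: each humor is paired with its
# classical qualities, and the helper reports which qualities a corrective
# remedy should supply for an excess of a given humor.
HUMOR_QUALITIES = {
    "blood": {"hot", "wet"},
    "phlegm": {"cold", "wet"},
    "yellow bile": {"hot", "dry"},
    "black bile": {"cold", "dry"},
}
OPPOSITE = {"hot": "cold", "cold": "hot", "wet": "dry", "dry": "wet"}

def remedy_qualities(excess_humor):
    """Return the qualities a remedy needs to counteract an excess humor."""
    return {OPPOSITE[q] for q in HUMOR_QUALITIES[excess_humor]}

# Example: an excess of phlegm (cold and wet) calls for a hot, dry remedy,
# which is the humoral logic behind warming ingredients such as ginger.
print(remedy_qualities("phlegm"))  # prints {'hot', 'dry'} (set order may vary)

Students can then defend their designed remedy by checking whether its ingredients supply the qualities the program suggests, keeping the reasoning, not the code, at the center of the exercise.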

Teaching plague medicine carries powerful instructional implications. It fosters historical empathy by encouraging students to see past actions within their cultural context. It strengthens disciplinary literacy through close reading of primary sources and evaluation of evidence. It challenges misconceptions and reduces presentism, helping students develop a mature understanding of the past. The topic also naturally lends itself to interdisciplinary thinking, drawing connections between science, history, culture, and religion. Ultimately, medieval plague treatments offer teachers a rich opportunity to show students how historical interpretations develop through careful analysis of belief systems, available knowledge, and environmental conditions.

The Black Death will always capture students’ imaginations, but its true educational value lies in what it allows them to practice: empathy, critical thinking, and contextual reasoning. By reframing medieval treatments not as bizarre relics but as rational responses grounded in their own scientific traditions, teachers can transform a sensational topic into a meaningful lens for understanding how people in the past made sense of the world. In doing so, plague medicine becomes more than an engaging subject; it becomes a model for how historical study can illuminate the logic, resilience, and humanity of societies long removed from our own.

A fifteenth-century treatise on pestilence. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 193–194). Manchester University Press.

Archambeau, N. (2011). Healing options during the plague: Survivor stories from a fourteenth century canonization inquest. Bulletin of the History of Medicine, 85(4), 531–559. http://www.jstor.org/stable/44452234 

Fabbri, C. N. (2007). Treating medieval plague: The wonderful virtues of theriac. Early Science and Medicine, 12(3), 247–283. http://www.jstor.org/stable/20617676

Knutsson, B. (1994). The dangers of corrupt air. In R. Horrox (Ed. & Trans.), The Black Death (pp. 175–177). Manchester University Press.

Paris Medical Faculty. (1994). The report of the Paris medical faculty, October 1348. In R. Horrox (Ed. & Trans.), The Black Death (pp. 158–163). Manchester University Press.

Heinrichs, E. A. (2017). The live chicken treatment for buboes: Trying a plague cure in medieval and early modern Europe. Bulletin of the History of Medicine, 91(2), 210–232. https://www.jstor.org/stable/26311051 

Leong, E., & Rankin, A. (2017). Testing drugs and trying cures: Experiment and medicine in medieval and early modern Europe. Bulletin of the History of Medicine, 91(2), 157–182. https://www.jstor.org/stable/26311049 

The plague in Central Europe. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 193–194). Manchester University Press.

de’ Mussis, G. (1994). The arrival of the plague. In R. Horrox (Ed. & Trans.), The Black Death (p. 25). Manchester University Press.

The treatise of John of Burgundy. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 184–192). Manchester University Press.

Theophilus, D. (1994). A wholesome medicine against the plague. In R. Horrox (Ed. & Trans.), The Black Death (pp. 149–153). Manchester University Press.

The transmission of plague. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 182–184). Manchester University Press.

Combating and Treating the Black Death

Imagine a deadly disease ripping through your town, with your only hope of survival in the hands of health workers who rely on established medical knowledge and practical methods in desperate attempts to save lives. During the late medieval period, between 1347 and 1351, the Black Death sowed chaos across Europe, including cities in France and Italy, killing millions of people in its path. It generated such fear and uncertainty about survival that people turned to a wide variety of treatment methods, blending medical practice with religious and supernatural beliefs. These different approaches reveal just how much medical knowledge at the time was shaped by pre-established knowledge, traditional theories, and practical methods inherited from the past, raising the question: how did health workers attempt to treat and combat the plague during the medieval period? They did so by mixing established medical knowledge with practical methods. Treatments like theriac, bloodletting, air purification, and experimental remedies from the past like imperial powder combined traditional healing with evolving practice. Tracing these methods shows how past medical knowledge and evolving practices were used by health workers to treat and combat the Black Death, and it reveals both the intellectual growth and the evolution of medical treatment.

These health workers were diverse in their levels of medical knowledge; some were volunteers, nuns, inexperienced physicians, or barber-surgeons. Despite their varied expertise, they played the central role in the plague, delivering treatments to those who fell victim to the Black Death. This approach highlights the interplay among practical methods, established medical knowledge, adaptation, and preventive measures in combating the plague.

Health workers fought back against the Black Death using practical methods like bloodletting, drawn from past medical knowledge and from the public health rules emerging at the time. As they struggled to cope with the crisis the Black Death brought, they turned to practical and hygienic measures to help those falling ill. One such treatment was bloodletting. Neil Murphy’s article “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450–c.1560” details the development of public health systems and the ordinances that shaped responses to the plague.[1] Murphy argues that these ordinances emerged from evolving strategies, like those in Italy, and were connected to cultural and intellectual contexts that fused medical theory with practical action. He emphasizes the practice of bloodletting, performed by barber-surgeons or surgeons and aimed at removing contaminated blood to slow the disease in the body.[2] This method shows the connection between the medical theories of the time and the practical actions they shaped.

Alongside bloodletting, past strategies also appear in attempts to change emotional as well as medical practice, visible through survivors’ stories. From these stories, we can understand the attempts made during this time to stop the plague, especially health workers helping on the basis of past medical knowledge and practical treatments. Nicole Archambeau, in “Healing Options during the Plague: Survivor Stories from a Fourteenth-Century Canonization Inquest,” emphasizes the intellectual context of medicine and its “miracles,” showing how beliefs and medical practices intersected to shape responses to the plague.[3] Some people at the time sought healing methods that combined religious and practical approaches, including changes in emotional state. Archambeau argues that “Witnesses had ‘healing options’… their testimonies reveal a willingness to try many different methods of healing, often all at once.”[4] Survivors relied on resources of every kind, from family, friends, and health workers, connecting their beliefs with the intellectual medical practices of the day. Health workers adapted their methods to the resources available and to patients’ wants and needs, highlighting their adaptability, their flexibility, and their commitment to treating the suffering during this time of horror and devastation.

Similarly, drawing on past medical knowledge, health workers gave treatments that blended intellectual medical knowledge with practical methods. Another piece of these treatments was a compound called theriac. Christiane Nockels Fabbri’s article “Treating Medieval Plague: The Wonderful Virtues of Theriac” shows that theriac, a compound used as an antidote since ancient times, became a crucial treatment during the Black Death. Fabbri argues that its use demonstrates how health workers applied a traditional remedy to a new disease, showing the conservatism of medical practice. As Fabbri states, “In plague medicine, theriac was used as both a preventive and therapeutic drug and was most likely beneficial for a variety of disease complaints.”[5] Health workers relied on it both for its practical efficiency and for its intellectual and cultural significance inherited from the past.

These three sources are alike in showing how health workers linked past medical knowledge with practical methods to relieve suffering, revealing how they attempted to treat the plague. Treatments like bloodletting, sought-after “miracle” methods, and theriac were just a few of the ways they tried to help the sick. My analysis highlights how these treatments rested on public health measures introduced in cities to contain the plague’s spread. Ordinances aimed to isolate the disease and keep calm amid the chaos the plague brought to town, creating a framework within which health workers could approach treating those who fell sick.

One of the main and best-known treatments health workers gave during this time was the drug theriac. It was extremely popular for its perceived effectiveness and was demanded by victims once they fell ill or feared they would. In “The Real Theriac – Panacea, Poisonous Drug or Quackery?,” Danuta Raj, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz discuss the compound’s reputed ability to remove disease and poison from the body and its wide use during the medieval period: “Consequently, Theriac was being prepared during epidemics, especially the plague (Black Death), in large quantities as a form of emergency medicine (Griffin, 2004).”[6] By relying on theriac as a direct treatment, health workers showed their commitment to an accessible, well-known drug that gave people confidence during a time of uncertainty and devastation.

Correspondingly, another direct treatment health workers used was bloodletting, performed by pricking veins to extract bad blood from the body and restore its balance. We see this in document 62, “The Treatise of John of Burgundy, 1365,” which records the practical medical knowledge health workers applied to treat those struck by the Black Death. Burgundy discusses bloodletting, advising that “If, however, the patient feels prickings in the region of the liver, blood should be let immediately from the basilic vein of the right arm (that is the vein belonging to the liver, which is immediately below the vein belonging to the heart).”[7] This specific technique gives us a practical method of treatment and shows how health workers used hands-on procedures to combat the plague.

Both methods were widely known during the medieval period. They offered hope to the desperate who wanted treatment so they would not die, and they gave victims a feeling of control over a terrifying situation. Knowing that theriac and bloodletting were available as treatments made people feel less overwhelmed and cast health workers as the redeeming feature of a deadly crisis.

Established Medical Knowledge

During the medieval period, health workers understood miasma, contaminated air, to be the main reason the Black Death was spreading so widely and killing everyone in its path. On this understanding, they implemented environmental purification strategies to limit exposure to miasma. Bengt Knutsson’s “The dangers of corrupted air” emphasizes this fear of contaminated air and describes methods used to cleanse the spaces and environments people lived in. One practice health workers implemented to hold back the miasma was this: “Therefore let your house be clean and make clear fire of wood flaming. Let your house be made with fumigation of herbs, that is to say with leaves of bay tree, juniper…”[8] Knutsson also explains opening windows at certain times and offers remedies for those who feel sick.[9] These techniques reflect how established medical knowledge could be turned into ways to treat and combat the plague. By folding purification methods into plague prevention, health workers adapted their knowledge of air quality into strategies against the Black Death.

Amid the fears of the Black Death, health workers relied on past medical knowledge, practices, and strategies to manage the disease’s spread and treat the infected. The “Ordinances against the spread of plague, Pistoia, 1348” show how officials used this knowledge to reduce transmission and create a safer environment for treatment. The chronicler describes limiting exposure to the ill by strictly restricting contact between people and patients.[10] Such restrictions gave health workers the safest opportunity to apply treatments, like bloodletting or administering theriac, in a more controlled environment. This approach further reflects the combination of traditional medical knowledge and practical adaptation through which health workers tried to combat the plague’s destruction.

Health workers leaned heavily on past medical knowledge and theory during this time of uncertainty, joining adaptation to established knowledge. The belief that bad air was the cause guided purification techniques like burning herbs to mask the miasma, while ordinances stressing isolation and restricted interaction created a safer environment for health workers, showing both their adaptability to the demands of the plague and their preservation of the historical medical theories of those who came before. Together these measures show the continuity and innovation of this period’s efforts to understand and combat the plague.

One way health workers attempted to treat and combat the plague was by developing treatments adapted from past medical knowledge. An example was imperial powder, described in John of Burgundy’s treatise as a “powerful preventative” thought to be stronger than theriac. Burgundy explains that “gentile emperors used it against epidemic illness, poison and venom, and against the bite of serpents and other poisonous animals.”[11] The powder was made from herbs such as St John’s wort, medicinal earth from Lemnos, and dittany, a diverse set of ingredients believed since antiquity to kill off the poisons and venoms inside the body. To use it, health workers either applied it directly to the skin or mixed it with a drink like wine for ingestion. This shows their willingness to experiment with past medical treatments, adapting them to the current plague in search of a better remedy for the Black Death.

Beyond the medical treatments themselves, health workers implemented strict isolation strategies to combat and limit the plague’s spread while keeping the environment safe enough to treat the ill. Louis Heyligen’s “The plague in Avignon” emphasizes staying away from neighboring areas and people so that health workers could do what was needed, an attempt to manage the disease’s spread through the town. It advises: “…avoid getting cold, and refrain from any excess, and above all mix little with people – unless it be with few who have healthy breath; but it is best to stay at home until the epidemic has passed.”[12] Such advice reflects how the public health strategies employed in cities were tied to medical treatment, because limiting exposure directly allowed more health workers to treat the sick safely. Minimizing contact was an effective strategy for slowing transmission, and the emotions stirred by the Black Death show how willing people were to make conditions safer for families and health workers.

Combining experimental treatments like imperial powder with isolation policies reveals just how much health workers joined preexisting medical knowledge to preventive measures in order to combat the plague while treating it. That adaptability shaped later medical practice and laid a foundation for future disease-prevention strategies.

In conclusion, we have explored several ways health workers attempted to treat and combat the plague through pre-established medical knowledge and practical methods. Remarkably diverse in background, they applied many strategies: enforcing strict public health ordinances, bloodletting by barber-surgeons, air purification, the use of theriac, and experiments with imperial powder. They showed great adaptability to what was happening while building on existing medical knowledge to address one of the deadliest crises in history. This analysis gives a deeper understanding of how they used past resources to comprehend the disease and try to save those who contracted it. It also shows how deeply these attempts were rooted in the intellectual history of the time, as health workers drew on past medical scholars and past knowledge to refine their practices and methods. By engaging with that intellectual history, they added a building block atop centuries of medical knowledge, experimenting with it and devising new responses to a new disease. These contributions strengthen our understanding of medical history during the Black Death and the centuries before it.

Archambeau, Nicole. “Healing Options during the Plague: Survivor Stories from a Fourteenth-Century Canonization Inquest.” Bulletin of the History of Medicine 85, no. 4 (2011):  531–59. http://www.jstor.org/stable/44452234.

Burgundy, John of. “The Treatise of John of Burgundy, 1365.” In The Black Death, edited and translated by Rosemary Horrox, 184–193. Manchester: Manchester University Press, 1994.

Chiappelli, A. “Ordinances against the Spread of Plague, Pistoia, 1348.” In The Black Death, edited and translated by Rosemary Horrox, 194–203. Manchester: Manchester University Press, 1994.

Fabbri, Christiane Nockels. “Treating Medieval Plague: The Wonderful Virtues of Theriac.” Early Science and Medicine 12, no. 3 (2007): 247–83. http://www.jstor.org/stable/20617676.

Heyligen, Louis. “The Plague in Avignon.” In The Black Death, edited and translated by Rosemary Horrox, 41–45. Manchester: Manchester University Press, 1994.

Horrox, R., ed. The Black Death (Manchester: Manchester University Press, 1994).

Knutsson, Bengt. “The Dangers of Corrupted Air.” In The Black Death, edited and translated by Rosemary Horrox, 173–177. Manchester: Manchester University Press, 1994.

Murphy, Neil. “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450–c.1560.” In The Fifteenth Century XII: Society in an Age of Plague, edited by Linda Clark and Carole Rawcliffe, 139–160. Woodbridge: Boydell & Brewer, 2013.

Raj, Danuta, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz. 2021. “The Real Theriac – Panacea, Poisonous Drug or Quackery?” Journal of Ethnopharmacology 281 (December). doi:10.1016/j.jep.2021.114535.


[1] Murphy, Neil. “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450–c.1560.” In The Fifteenth Century XII: Society in an Age of Plague, edited by Linda Clark and Carole Rawcliffe, 139–160. Woodbridge: Boydell & Brewer, 2013.

[2] Murphy, 146.

[3] Archambeau, Nicole. “Healing Options during the Plague: Survivor Stories from a Fourteenth-Century Canonization Inquest.” Bulletin of the History of Medicine 85, no. 4 (2011): 531–59. http://www.jstor.org/stable/44452234.

[4] Archambeau, 537.

[5] Fabbri, Christiane Nockels. “Treating Medieval Plague: The Wonderful Virtues of Theriac.” Early Science and Medicine 12, no. 3 (2007): 247–83. http://www.jstor.org/stable/20617676.

[6] Raj, Danuta, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz. 2021. “The Real Theriac – Panacea, Poisonous Drug or Quackery?” Journal of Ethnopharmacology 281 (December).

[7] Burgundy, “The Treatise of John of Burgundy, 1365,” in The Black Death, ed. and trans. Rosemary Horrox (Manchester: Manchester University Press, 1994), 189.

[8] Knutsson, “The dangers of corrupted air,” 176.

[9] Knutsson, “The dangers of corrupted air,” 176.

[10] Chiappelli, “Ordinances against the spread of plague, Pistoia, 1348,” 195.

[11] Burgundy, “The Treatise of John of Burgundy, 1365,” 190.

[12] Heyligen, “The Plague in Avignon,” 45.

Unseen Fences: How Chicago Built Barriers Inside its Schools

Northern public schools are rarely centered in national narratives of segregation. Yet as Thomas Sugrue observes, “even in the absence of officially separate schools, northern public schools were nearly as segregated as those in the south.”[1] Chicago illustrates this: despite the absence of Jim Crow laws, the city developed a racially organized educational system that produced outcomes nearly identical to those of legally segregated southern districts. The city’s officials celebrated equality while pursuing practices that isolated black students in overcrowded schools. Northern segregation was never written into law, but it was pervasive all the same, embedded in the policies and structures of urban governance.

This paper argues that Chicago school segregation was intentional. It resulted from a coordinated system that connected housing discrimination, political resistance to integration, and targeted policies crafted to preserve racial separation in public schools. While Brown v. Board of Education outlawed segregation by law, Chicago’s political leaders, school administrators, and civic networks maintained it through zoning, redlining, and administrative manipulation. Using primary sources, including newspapers and NAACP records, alongside extensive historical scholarship, this paper shows how segregation in Chicago was enforced, defended, challenged, and exposed by the communities it harmed.

The historical context outlined above leads to several central research questions that guide this paper. First, how did local governments and school boards respond to the Brown v. Board of Education decision, and how did their policies influence the persistence of segregation in Chicago? Second, how did housing patterns and redlining contribute to the continued segregation of schools? Third, how did the racial dynamics of Chicago compare to those in other northern cities during the same period?

These questions have been explored by a range of scholars. Thomas Sugrue’s Sweet Land of Liberty provides the framework for understanding northern segregation as a system rooted in local government rather than state law. Sugrue argues that racism in the north was “structural, institutional, and spatial rather than legal,” shaped through housing markets, zoning decisions, and administrative policy. His work shows that northern cities constructed segregation through networks of bureaucratic authority that were hard to challenge. Sugrue’s analysis supports this paper’s argument by demonstrating that segregation in Chicago was not accidental but was maintained through everyday decisions.

Philip T.K. Daniel’s scholarship deepens this analysis of Chicago by showing how school officials resisted desegregation both before and after Brown v. Board. In A History of the Segregation-Discrimination Dilemma: The Chicago Experience, Daniel shows that Chicago public school leaders manipulated attendance boundaries, ignored overcrowded schools, and defended “neighborhood schools” as a way to preserve racial separation. Daniel highlights that “in the years since 1954 Brown v. Board of Education decision, research have repeatedly noted that all black schools are regarded inferior,”[2] underscoring the persistence of inequality despite federal mandates. Daniel’s findings reinforce this paper’s claim that Chicago’s system was intentional and that local officials played a central role in maintaining segregation.

Dionne Danns offers a different perspective by examining how students, parents, and community activists responded to Chicago public schools’ discriminatory practices. In Crossing Segregated Boundaries, her study of Chicago’s high school students’ movement, Danns argues that local activism was essential to exposing segregation that officials tried to hide. She shows that black youth did not merely endure the inequalities of their schools but organized campaigns, boycotts, and sit-ins that challenged Chicago Public Schools officials and reshaped the politics of education. Danns’ work supports the middle portion of this paper, which analyzes how community resistance forced Chicago’s segregation practices into public view.

Paul Dimond’s Beyond Busing highlights how the court system struggled to confront segregation in northern cities because it was not anchored in explicit law. Dimond argues that Chicago officials used zoning, optional attendance areas, intact busing, and boundary lines to maintain separation while staying within the law. He highlights that “the constant thread in the boards school operation was segregation, not neighborhood,”[3] showing that geographic justifications often served as cover for racial intent. Dimond’s analysis strengthens the argument that Chicago’s system was coordinated and deliberate, built through “normal” administrative decisions.

Jim Carl extends the scholarship into the era of Harold Washington, showing how political leadership shaped educational reform. Carl argues that Washington sought to improve black schools not through desegregation but through resource equity and economic opportunity for black students. This perspective highlights how entrenched the early segregation policies were: even reformers like Washington inherited a system built to disadvantage black communities. While Carl’s focus falls later than this paper’s period, his work shows how political structures preserved segregation for decades.

Chicago’s experience with segregation was both typical of and distinct from other northern cities. Detroit, Philadelphia, and New York faced similar challenges, but Chicago’s political machine gave them a distinctive shape. As Danns explains in “Northern Desegregation: A Tale of Two Cities,” “Chicago was the earliest northern city to face Title VI complaint. Handling the complaint, and the political fallout that followed, left the HEW in a precarious situation. The Chicago debacle both showed HEW enforcement in the North and West and the HEW investigating smaller northern districts.”[4] This shows how thoroughly political interests molded the city’s approach to desegregation, and how hard a time federal authorities had holding local systems responsible. The tension between local and federal power reflected a broader national struggle over civil rights in the north, a reminder that racial inequality was not confined to one region but spanned the entire country. Chicago’s case highlights the difficulty of producing desegregation where segregation rested less on law than on policy and politics.

Local policy and zoning decisions entrenched segregation even further. In Beyond Busing, Paul R. Dimond writes, “To relieve overcrowding in a recently annexed area with a racially mixed school to the northeast, the Board first built a school in a white part and then rejected the superintendent’s integrated zoning proposal to open new schools…. the constant thread in the Board’s school operations was segregation, not neighborhood.”[3] Such decisions show segregation maintained through policy manipulation rather than overtly illegal measures.

Dimond further emphasizes the pattern: “throughout the entire history of the school system, the proof revealed numerous manipulations and deviations from ‘normal’ geographic zoning criteria in residential ‘fringes’ and ‘pockets,’ including optional zones, discontinuous attendance areas, intact busing, other gerrymandering and school capacity targeted to house only one race; this proof raised the inference that the board chose ‘normal’ geographic zoning criteria in the large one-race areas of the city to reach the same segregated result.”[3] These adjustments were hard to detect but effective in strengthening segregation, ensuring that even when schools opened, their locations and resources meant black students and white students would have different educational environments. The school board’s actions reveal a larger strategy of protecting the status quo under the banner of “neighborhood” schools, making clear that segregation was not an accident but a policy.

Carl, by contrast, highlights the policy solutions considered for promoting integration, such as programs meant to “attract a multiracial, mixed-income student body. Redraw district lines and place new schools to maximize integration… busing does not seem to be an issue in Chicago…it should be obviously metro wide, because the school system is 75 percent minority.”[5] This approach underscores the importance of systemic solutions that go beyond busing: genuine integration required addressing the residential roots of racial segregation in schools. Carl’s argument suggests that busing by itself could not create lasting change. Redrawing district lines was not just about moving children around, but about changing the conditions that reinforce segregation.

Understanding Chicago’s segregation requires comparing northern and southern practices. Unlike the South, where segregation was written into law, northern segregation was de facto, maintained through residential patterns, local policies, and bureaucratic practices. Sugrue explains, “in the south, racial segregation before Brown was not fundamentally intertwined with residential segregation.”[1] This contrast shows how urban geography and housing discrimination shaped educational inequality in northern cities. In Chicago, racially restrictive covenants and redlining confined Black families to specific neighborhoods, and residence in turn determined which school children could attend. This allowed northern officials to claim that segregation reflected neighborhoods rather than policy.

Southern districts did not rely on geographic attendance zones to enforce separation; as Sugrue notes, “southern districts did not use geographic attendance zones to separate black and whites.”[1] In contrast, northern cities like Chicago used attendance zones and local governance to achieve similar results. Danns notes, “while legal restrictions in the south led to complete segregation of races in schools, in many instances the north represented de facto segregation, which was carried out as a result of practice often leading to similar results.”[4] This highlights the different methods of segregation across regions, even after the legal mandates for integration. In the South, segregation was enforced by law, making the racial boundaries clear and intentional.

Still, advocacy groups were aware of the nationwide nature of this struggle. The Key West Citizen reported, “a stepped-up drive for greater racial integration in public schools, North and South is being prepared by “negro” groups in cities throughout the country.” Resistance to integration could take extreme forms, including forcing Black children to travel long distances to segregated schools while allowing white children to avoid those schools. The Robbins Eagle noted, “colored children forced from the school they had previously attended and required to travel two miles to a segregated school…white children permitted to avoid attendance at the colored school on the premise that they have never been enrolled there.”[6] These examples show how resistance to integration formed a national pattern of inequality. Even though activists and civil rights groups fought for educational justice, local officials and white communities found ways to maintain racial segregation. For Black families, this meant their children bore the physical and emotional burdens of segregation: long commutes, inferior facilities, and constant reminders of discrimination. White students, meanwhile, benefited from more funding and better-funded schools. These differences show how deeply racial inequality was embedded in American education, as northern and southern systems alike worked to preserve it in different ways.

The policies that shaped Chicago schools in the 1950s and 1960s cannot be understood without looking at key figures such as Benjamin Willis and Harold Washington. Benjamin Willis, superintendent of Chicago Public Schools from 1953 to 1966, became known for his resistance to integration efforts. Willis’ administration relied on the construction of mobile classrooms, also known as “Willis wagons,” to deal with the overcrowding of Black schools. Rather than reassigning students to nearby under-enrolled schools, Willis placed these classrooms in the yards of segregated schools. As Danns explains, Willis “was seen by Chicagoans as the symbol of segregation as he gerrymandered school boundaries and used mobile classrooms (labeled Willis Wagons) to avoid desegregation.”[4] His refusal to implement desegregation measures made him a target of protest, including boycotts led by families and students.

On the other hand, Harold Washington, who would become Chicago’s first Black mayor, represented a shift toward community-based reform and equity-focused policies. Washington believed that equality in education required more than racial integration: it needed structural investment in Black schools and economic opportunities for Black students. Jim Carl describes the approach “Washington would develop over the next thirty-three years, one that insisted on adequate resources for Black schools and economic opportunities for Black students rather than viewing school desegregation as the primary vehicle for educational improvement.”[5] His leadership bridged the civil rights struggles of the 1950s and 1960s and the justice movements that followed in the post-civil rights era.

Chicago’s experience in the mid-twentieth century provides an example of how racial segregation was maintained through policy rather than law. The postwar era brought rapid growth in Chicago’s Black population; as Daniel writes, “this increased the black school population in that period by 196 percent.”[2] By the 1950s, the Second Great Migration reinforced these trends, with thousands of Black families arriving from the South every year. As Sugrue notes, “Blacks who migrated North held high expectations about education.”[1] There was hope that northern schools would offer opportunities unavailable in the South. Instead, Chicago’s public schools soon became a site of racial conflict, as overcrowding, limited resources, and administrative discrimination exposed the limits of those expectations.

One defining feature of Chicago’s educational system in this era was the “neighborhood schools” policy. On paper, the policy simply allowed students to attend schools near their homes, strengthening community ties. In practice, it was a powerful tool for preserving racial segregation. Sugrue explains, “in densely populated cities, schools [were] often within a few blocks of one another, meaning that several schools might serve as “neighborhood”.”[1] Because housing in Chicago was strictly segregated through redlining, racially restrictive covenants, and de facto residential exclusion, neighborhood-based zoning meant that Black and white students were sorted into separate schools. This system allowed city officials to claim that segregation reflected residential patterns rather than intentional policy, thereby sidestepping any apparent violation of Brown. A 1960 New York Times article by Anthony Lewis, “Fight on the Floor Ruled Out,” revealed how Chicago officials publicly dismissed accusations of segregation while internally sustaining the practice. The article reported that school leaders insisted that racial imbalance merely reflected “neighborhood conditions” and that CPS policies were “not designed to separate the races,” even as Black schools operated far beyond capacity.[7] This national visibility shows that Chicago’s segregation was deliberate: officials framed their decisions as demographic realities, even as they consistently rejected integration measures that would have eased overcrowding in Black schools.

The consequences of these policies became visible by the 1960s. Schools in Black neighborhoods were overcrowded, operating on double shifts or in temporary facilities. As Dionne Danns describes in “Northern Desegregation: A Tale of Two Cities,” “before school desegregation, residential segregation, along with Chicago Public School (CPS) leaders’ administrative decisions to maintain neighborhood schools and avoid desegregation, led to segregated schools. Many Black segregated schools were historically under-resourced and overcrowded and had higher teacher turnover rates.”[8] Nearby white schools, meanwhile, had empty classrooms and more modern facilities. This inequality sparked widespread community outrage, setting the stage for the educational protests that would define Chicago’s civil rights movement.

The roots of Chicago’s school segregation lay in its housing policies. Through redlining, federal agencies and banks denied loans to Black homebuyers and systematically confined Black families to certain areas of the city’s South and West Sides. These neighborhoods were often marked by aging housing stock, limited public investment, and overcrowding. Because school attendance zones were aligned with neighborhood boundaries, these patterns of residential segregation were mirrored in the city’s schools. As historian Matthew Delmont explains in his book Why Busing Failed, this dynamic drew the attention of federal authorities: “On July 4, 1965, after months of school protest and boycotts, civil rights groups advocated in Chicago by filing a complaint with the U.S. Office of Education charging that Chicago’s Board of Education violated Title VI of the Civil Rights Act of 1964.”[9] This reflected how deeply intertwined housing and education policies were as engines of racial segregation. The connection between where families could live and where their children could attend school showed how racial inequality was reproduced through everyday administrative decisions, molding opportunities for generations of Black Chicagoans.

These systems of housing, zoning, and education helped maintain a racial hierarchy under local control. Even after federal courts and civil rights organizations pushed for compliance with Brown, Chicago’s officials argued that their schools reflected demographic reality rather than discriminatory intent. This argument concealed the degree to which city planners, developers, and school administrators collaborated. School segregation was not a departure from southern-style Jim Crow, but a defining feature of northern governance.

Chicago’s struggle with school segregation did not go unchallenged. Legal challenges and community activism were essential tools in confronting these inequalities. The NAACP Legal Defense Fund filed numerous lawsuits challenging segregative policies and targeted districts that violated the state’s education law. Parents and students organized boycotts and protests to draw attention to the injustices. Sugrue notes, “the stories of northern school boycotts are largely forgotten. Grassroots boycotts, led largely by mothers, inspired activists around the country to demand equal education.”[1] The boycotts were not merely symbolic but strategic, community-driven actions aimed at the system’s resistance to change. These movements represented an assertion of power from communities that had been silenced by discriminatory policies. Parents, especially Black mothers, became central figures in these campaigns, using their voices and organizing to demand accountability from school boards and city officials. Their actions demonstrated that change would come not only from courtrooms but from the people affected by injustice. The boycotts disrupted the normal operation of the school system and forced officials to listen to demands for equal education.

Danns emphasizes the range of activism during this period, writing in “Chicago High School Students’ Movement for Quality Public Education”: “in the early 1960’s, local and prominent civil rights organizations led a series of protests for school desegregation. These efforts included failed court cases, school boycotts, and sit-ins during superintendent Benjamin Willis administration, all which led to negligible school desegregation.”[10] Despite their limited success, the activism of the 1960s was important for exposing the limits of northern liberalism and the persistence of racial inequality outside the South. Student-led protests and community organizing not only challenged the policies of the Chicago Board of Education but also inspired a new generation of young people to see education as central to the struggle for civil rights.

Legal tactics were critical in enforcing desegregation agreements. NAACP records describe action taken “on the basis of an Illinois statute which states that state-aid funds may be withheld from any school district that segregated based on race or color.”[11] The withholding of state funds applied pressure on resistant boards, showing that legal leverage could carry real consequences. When one board attempted to deny Black students admission, the NAACP intervened. The Evening Star reported, “Although the board verbally refused to admit negro students and actually refused to do so when Illinois students applied for admission, when the board realized that the NAACP was going to file suit to withhold state-aid funds, word was sent to each student who had applied that they should report to morning classes.”[12] This shows how legal and financial pressure became one of the most effective means of enforcing desegregation. The threat of losing funds forced school boards to comply with integration orders, underscoring that moral appeals alone were inadequate to undo a system of discrimination. The NAACP’s strategy paired advocacy with legal enforcement, using the courts and state statutes to hold districts accountable. It illustrated that the fight for educational equality required not only protest but also a legal foundation to ensure that justice was carried out. This combination of legal action and grassroots mobilization reflects a strategy that drew on both formal institutions and community power, showing that northern resistance to desegregation was far from immovable.

Chicago’s segregated schools had long-lasting effects on Black students, particularly through inequalities in the education system. Schools in Black neighborhoods were often overcrowded, underfunded, and provided fewer academic resources than their white counterparts. These disparities limited educational opportunities and shaped students’ futures. The lack of funding meant that schools could not afford advanced coursework, extracurricular programs, or even basic classroom resources, widening the gap in the quality of education between Black and white students. Black students in these environments faced not only educational disadvantages but also diminished hope for their futures.

Desegregation advocates sought to address both inequality and social integration. Danns explains, “Advocates of school desegregation looked to create integration by putting students of different races into the same schools. The larger goal was an end to inequality, but a by-product was that students would overcome their stereotypical ideas of one another, learn to see each other beyond race, and even create interracial friendships.”[4] While the ideal of desegregation included fostering social understanding, the reality of segregated neighborhoods and schools often hindered these outcomes. Even when legal policies aimed to desegregate schools, social and economic barriers continued to enforce separation. Many white families moved to suburban districts to avoid integration, undermining efforts to create racially diverse classrooms and leaving many urban schools attended almost entirely by students of color.

The larger society influenced students’ experiences inside schools, despite efforts to create inclusive educational spaces. Danns explains, “In many ways, these schools were affected by the larger society; and tried as they might. Students often found it difficult to leave their individual, parental, or community views outside the school doors.”[9] Even when students developed friendships across racial and ethnic lines, segregated boundaries persisted: “Segregated boundaries remained in place even if individuals had made friends with people of other racial and ethnic groups.”[4] The ongoing influence of social norms and expectations meant that schools could not be insulated from the racial tensions that existed outside their walls. While teachers and administrators may have tried to foster a more integrated environment, the racial hierarchies and prejudices of the surrounding community often shaped students’ interactions. These hurdles were not always visible, but they influenced behavior within the school in subtle ways. Despite the efforts at inclusion, the societal context of segregation remained a constraint, limiting both integration and equality in education.

Beyond the social barriers, the practical issue of overcrowding continued to affect education. Carl highlights this concern, quoting Washington: the issue “is not ‘busing,’ it is freedom of choice. Parents must be allowed to move their children from overcrowded classrooms. The real issue is quality education for all.”[5] The focus on “freedom of choice” underscores that structural inequities, rather than simple policy failures, were central to the ongoing disparities in Chicago’s schools.

Overcrowding in urban schools pointed to a deeper root of inequality. Black neighborhoods were often left with underfunded and overcrowded schools, while white schools had smaller classes and more resources. The phrase “freedom of choice” was meant to assert that parents in marginalized communities deserved the same educational opportunities as those in wealthier neighborhoods. In practice, however, this freedom was limited by residential segregation, unequal funding, and bureaucratic barriers that constrained many families within the public school system.

The long-term impact of segregation extended beyond academics into the social and psychological lives of Black students. Segregation reinforced systemic racism and social divisions, contributing to limited upward mobility, economic inequality, and mistrust of institutions. Beyond the classroom, these effects shaped how Black students viewed themselves and their place in society. Psychologically, this often resulted in lower self-esteem and diminished academic motivation. Socially, segregation limited interactions between racial groups and hardened stereotypes. Over time, these experiences fed a cycle of mistrust in educational and governmental institutions, as Black communities continued to struggle with persistent inequality.

Black students were often unprepared for the realities beyond their segregated neighborhoods: “Some Black participants faced a rude awakening about the world outside their high schools. Their false sense of security was quickly disrupted in the isolated college towns they moved to, where they met students who had never had access to the diversity they took for granted.”[9] This contrast between the relative diversity within segregated urban schools and the homogeneity of these new environments illustrates how deeply segregation shaped expectations, socialization, and identity formation.

Even after desegregation policies were implemented, disparities persisted in access to quality education. Danns observes that, decades later, access to elite schools remained unequal: “After desegregation ended, the media paid attention to the decreasing spots available at the city’s top schools for Black and Latino students. In 2018, though Whites were only 10 percent of the Chicago Public Schools population, they had acquired 23 percent of the premium spots at the top city schools.”[7] This statistic underscores the enduring structural and systemic inequalities in the educational system, which favored certain groups’ access to resources while disadvantaging others. Segregation has taken new forms, operating through economic and residential patterns rather than laws. This highlights the limits of policy alone and underscores the need for deeper social, economic, and institutional change to achieve the goal of educational equality.

Segregation not only restricted access to academic resources but also had broader psychological consequences. By systematically limiting opportunities and reinforcing racial hierarchies, segregated schooling contributed to feelings of marginalization and diminished trust in public institutions. The experience of navigating a segregated school system often left Black students negotiating between pride in their communities and the constraints imposed by discriminatory policies. These psychological scars persisted long after segregation ended. The pain of decades of separation made it hard for many Black families to believe that change would bring equality. Segregation was not only a structural injustice but an emotional one, shaping how generations of students understood their worth and their connection to a system that had failed them before.

The structural and social consequences of segregation were deeply intertwined. Overcrowded and underfunded schools diminished educational outcomes, which in turn limited economic and social mobility. Social and psychological barriers reinforced these disparities, creating a cycle that affected multiple generations. Yet the activism, legal challenges, and community efforts described earlier demonstrate that Black families actively resisted these constraints, fighting for opportunity and equality. Their fight not only challenged the system’s injustice but also laid a foundation for further civil rights reforms and influenced future movements.

Examining Chicago’s segregation in the context of broader northern and national trends makes clear that local policies and governance played an outsized role in shaping Black students’ experiences. While southern segregation was often codified in law, northern segregation relied on policy, zoning, and administrative practices to achieve similar results. The long-term impact on Chicago’s Black communities reflects the consequences of these forms of institutionalized racism, emphasizing the importance of both historical understanding and ongoing policy reform.

Chicago’s school segregation was neither accidental nor merely demographic; it was the product of housing, political, and administrative decisions designed to preserve racial separation. The city’s leaders built a system that mirrored the thinking behind Jim Crow laws without their legal framework, making northern segregation harder to see. Through policies couched in bureaucratic language, Chicago Public Schools and city officials ensured that children received unequal educations for decades.

The legacy of Chicago’s segregation exposes the enduring character of educational inequality. Although activists, parents, and students fought to expose these injustices, the discriminatory structures created in the mid-twentieth century continue to shape educational outcomes today. Understanding the intentional design behind Chicago’s segregation is essential to understanding the persistent racial inequalities that define American schooling. It is also a call to action for reformers today to confront the historical and structural forces that produced these disparities. The fight for equitable education is not just about addressing present-day inequalities but also about dismantling the policies and systems that were built to maintain racial separation. The struggle for equality in education remains unfinished; only by acknowledging the choices that led to this situation can the structures that continue to limit opportunities for future generations be broken down.

Evening Star. (Washington, DC), Oct. 23, 1963. https://www.loc.gov/item/sn83045462/1963-10-23/ed-1/.

Evening Star. (Washington, DC), Oct. 22, 1963. https://www.loc.gov/item/sn83045462/1963-10-22/ed-1/.

Evening Star. (Washington, DC), Sep. 8, 1962. https://www.loc.gov/item/sn83045462/1962-09-08/ed-1/.

NAACP Legal Defense and Educational Fund. NAACP Legal Defense and Educational Fund Records: Subject File; Schools; States; Illinois; School Desegregation Reports, 1952–1956. Manuscript/Mixed Material. Library of Congress. https://www.loc.gov/item/mss6557001591/.

The Robbins Eagle. (Robbins, IL), Sep. 10, 1960. https://www.loc.gov/item/sn2008060212/1960-09-10/ed-1/.

The Key West Citizen. (Key West, FL), Jul. 9, 1963. https://www.loc.gov/item/sn83016244/1963-07-09/ed-1/.

Carl, Jim. “Harold Washington and Chicago’s Schools between Civil Rights and the Decline of the New Deal Consensus, 1955-1987.” History of Education Quarterly 41, no. 3 (2001): 311–43. http://www.jstor.org/stable/369199.

Danns, Dionne. Crossing Segregated Boundaries: Remembering Chicago School Desegregation. New Brunswick, NJ: Rutgers University Press, 2020. https://research.ebsco.com/linkprocessor/plink?id=a82738b5-aa61-339b-aa8a-3251c243ea76.

Danns, Dionne. “Chicago High School Students’ Movement for Quality Public Education, 1966-1971.” The Journal of African American History 88, no. 2 (2003): 138–50. https://doi.org/10.2307/3559062.

Danns, Dionne. “Northern Desegregation: A Tale of Two Cities.” History of Education Quarterly 51, no. 1 (2011): 77–104. http://www.jstor.org/stable/25799376.

Delmont, Matthew F. Why Busing Failed: Race, Media, and the National Resistance to School Desegregation. Cambridge, MA: Harvard University Press, 2016.

Daniel, Philip T. K. “A History of the Segregation-Discrimination Dilemma: The Chicago Experience.” Phylon 41, no. 2 (1980): 126–36. https://doi.org/10.2307/274966.

Daniel, Philip T. K. “A History of Discrimination against Black Students in Chicago Secondary Schools.” History of Education Quarterly 20, no. 2 (1980): 147–62. https://doi.org/10.2307/367909.

Dimond, Paul R. Beyond Busing: Reflections on Urban Segregation, the Courts, and Equal Opportunity. Ann Arbor: University of Michigan Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=76925a4a-743d-3059-9192-179013cceb31.

Sugrue, Thomas J. Sweet Land of Liberty: The Forgotten Struggle for Civil Rights in the North. New York: Random House, 2008.


[1] Thomas J. Sugrue, Sweet Land of Liberty: The Forgotten Struggle for Civil Rights in the North (New York: Random House, 2008).

[2] Philip T. K. Daniel, “A History of the Segregation-Discrimination Dilemma: The Chicago Experience,” Phylon 41, no. 2 (1980): 126–36.

[3] Paul R. Dimond, Beyond Busing: Reflections on Urban Segregation, the Courts, and Equal Opportunity (Ann Arbor: University of Michigan Press, 2005).

[4] Dionne Danns, Crossing Segregated Boundaries: Remembering Chicago School Desegregation (New Brunswick, NJ: Rutgers University Press, 2020).

[5] Jim Carl, “Harold Washington and Chicago’s Schools between Civil Rights and the Decline of the New Deal Consensus, 1955–1987,” History of Education Quarterly 41, no. 3 (2001): 311–43.

[6] The Robbins Eagle (Robbins, IL), September 10, 1960.

[7] The New York Times, “Fight on the Floor Ruled Out,” July 27, 1960, 1.

[8] Dionne Danns, “Northern Desegregation: A Tale of Two Cities,” History of Education Quarterly 51, no. 1 (2011): 77–104.

[9] Matthew F. Delmont, Why Busing Failed: Race, Media, and the National Resistance to School Desegregation (Cambridge, MA: Harvard University Press, 2016).

[10] Dionne Danns, “Chicago High School Students’ Movement for Quality Public Education, 1966–1971,” Journal of African American History 88, no. 2 (2003): 138–50.

[11] NAACP Legal Defense and Educational Fund, Subject File: Schools; States; Illinois; School Desegregation Reports, 1952–1956, Manuscript Division, Library of Congress.

[12] Evening Star (Washington, DC), September 8, 1962.

Camden’s Public Schools and the Making of an Urban “Lost Cause”

In modern-day America, there is perhaps no city quite as infamous as Camden, New Jersey. A relatively small urban community situated along the banks of the Delaware River, directly across from the sprawling, densely populated metropolis of Philadelphia, Camden would in any other circumstances likely be a niche community, familiar only to those in the immediate surrounding area. However, the story of Camden is perhaps one of the greatest instances of institutional collapse and urban failure in modern America, akin to the catastrophes that befell communities such as Detroit, Michigan and Newark, New Jersey throughout the mid-twentieth century.

Once an industrial juggernaut, housing powerful manufacturing corporations such as RCA Victor and the New York Shipbuilding Corporation, Camden was perhaps one of the urban communities most integral to the American war effort and eventual victory in the Pacific Theatre in World War II. However, in the immediate aftermath of the war, Camden experienced significant decline, its once-prosperous urban hub giving way to a landscape of disinvestment, depopulation, and despair. By the late twentieth century – specifically the 1980s and 1990s – Camden had devolved into a community wracked by poverty, crime, and drug abuse, bearing the notorious label “Murder City, U.S.A.” – a moniker which recast decades of systemic inequity and institutional discrimination as a fatalistic narrative, presenting Camden as a city beyond saving, destined for failure. However, Camden’s decline was neither natural nor inevitable but rather was carefully engineered through public policy. Through a calculated and carefully measured process of institutional segregation and racial exclusion, state and city lawmakers took advantage of Camden’s failing economy and evaporating job market to confine communities of color to deteriorating neighborhoods, effectively denying them access to the educational and economic opportunities afforded to white suburbanites in the surrounding area.

This paper focuses chiefly on Camden’s educational decline and inequities, situating them within a broader historical examination of postwar urban America. Utilizing the historiographical frameworks of Arnold Hirsch, Richard Rothstein, Thomas Sugrue, and Howard Gillette, this research seeks to interrogate and illustrate how segregation and suburbanization functioned as reinforcements of racial inequity, and how such disenfranchisement created the perfect storm of educational failure in Camden’s public school network. The work of these scholars demonstrates that Camden’s neighborhoods, communities, and schools were intentionally structured to contain, isolate, and devalue communities and children of color, and that these outcomes were not unintended byproducts of natural migration patterns or economic development. Within this context, it is clear that public education in the city of Camden did not simply mirror urban segregation, but institutionalized it: schools became both a reflection and a reproduction of the city’s racial geography, entrenching the divisions drawn by policymakers and real estate developers into a pervasive force present in all facets of life in Camden.

In examining the influence of Camden’s segregation on public education, this study argues that the decline of the city’s school system was not merely a byproduct, but an engine of institutional urban collapse. The racialized, inequitable geography of public schooling in Camden began as a willful and intentional product of institutional disenfranchisement and administrative neglect, but quickly transformed into a self-fulfilling prophecy of failure, as crumbling school buildings and curricular inequalities became manifestations of policy-driven failure, and narratives of students of color as “inferior” were internalized by children throughout the city. Media portrayals of the city’s school system and its youth, meanwhile, transformed these failures into moral statements and narratives, depicting Camden’s children and their learning communities as symbols of inevitable dysfunction rather than victims of institutional exclusion. Thus, Camden’s transformation into the so-called “Murder Capital of America” was inseparable from the exclusionary condition of the city’s public schools, which not only bore witness to segregation but became its most visible proof, informing fatalistic narratives about the city and the moral character of its residents.

Historians of postwar America have long established an understanding of racial and socioeconomic segregation as essential to the development of the modern American urban and suburban landscape, manufactured and carefully reinforced throughout the twentieth century by the nation’s political and socioeconomic elite. Foundational studies such as Arnold Hirsch’s “Making the Second Ghetto: Race and Housing in Chicago” (1983) and Richard Rothstein’s 2017 text, The Color of Law: A Forgotten History of How Our Government Segregated America, serve to reinforce such understandings of postwar urban redevelopment and suburban growth, situating both as the direct result of institutional policy rather than mere byproducts of happenstance migration patterns.[1] In The Color of Law, Rothstein explores the role of federal and state political institutions in the codification of segregation through intergenerational policies of redlining, mortgage restrictions, and exclusionary patterns in the extension of mortgage insurance to homeowners along racial lines. In particular, Rothstein focuses on the Federal Housing Administration’s creation of redlining maps, which designated majority Black and Hispanic neighborhoods as high-risk “red zones,” effectively denying residents of these communities home loans and thus intentionally erecting barriers to intergenerational wealth accumulation through homeownership in suburban communities such as Levittown, Pennsylvania.[2]

Hirsch’s “Making the Second Ghetto” echoes this narrative of urban segregation as manufactured, primarily through the framework of his “second ghetto” thesis. Conducting a careful case study of Chicago through this framework, Hirsch argues that local municipalities, urban developers and planners, and the business elite of Chicago worked in tandem to enact policies of “domestic containment,” wherein public housing projects were weaponized against Black and Hispanic communities to reinforce racial segregation throughout the city. Utilizing public housing as an anchor rather than a tool of mobility, Chicago’s socioeconomic and political elite effectively conspired at the institutional level to confine Black Chicagoans to closely regulated low-income communities, devaluing land and property in these areas while zoning more desirable land for redevelopment and suburban growth, thereby raising housing and relocation costs to levels that Black Americans simply could not afford, given the devaluation of their own communities and generational barriers to wealth accumulation.[3] Chris Rasmussen’s “Creating Segregation in the Era of Integration” applies such narratives to a close investigation of New Brunswick, New Jersey, particularly in regard to educational segregation, investigating how city authorities utilized similar institutional frameworks of racial separation (school zoning, prioritization of white communities and schools for development, and segregationist housing placements) to confine students to segregated schools and resist integration, building on the community segregation detailed in the work of Rothstein and Hirsch.[4]

Working in tandem with historical perspectives of segregation as integral to the development of suburban America and subsequent urban decline, historians have also identified disinvestment as a critical economic process in the exacerbation of urban inequality and eventual decay. Beginning in the aftermath of World War II, industrial urban communities faced significant losses of manufacturing employment, as corporations began to outsource their labor to overseas and suburban locations, often following the migration of white suburbanites. Robert Beauregard’s Voices of Decline: The Postwar Fate of U.S. Cities diverges from the perspectives of Hirsch and Rothstein, citing declining employment opportunities and urban disinvestment as the most important factor in the decline of urban America on a national scale. Beauregard argues that by framing the disinvestment of urban wartime industrial juggernauts such as Newark, Camden, and Detroit as an “inevitability” in the face of rapid deurbanization and the growth of suburban America, policymakers at the national and local levels portrayed urban decline as a natural process, as opposed to a deliberate effort to strip employment opportunities and the accumulation of capital from urban communities of color, even before suburbanization began to occur on a large scale.[5] Thomas Sugrue’s The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit also adheres to this perspective, situating economic devastation in the context of the development of racially exclusive suburban communities, thereby tying together the multiple perspectives expressed here into a comprehensive narrative of urban decline in mid-twentieth-century America: a recurring cycle of unemployment, abject poverty, and lack of opportunity, reinforced by public policies and social programs that, in theory, were supposed to alleviate such burdens.[6]

Ultimately, while these sources focus on differing aspects of urban decline, they work in tandem to form a comprehensive portrait of its causes in twentieth-century postwar America. From deindustrialization to segregation and its influence on disparities in education, these sources provide essential context for an in-depth examination of the specific case study of Camden, New Jersey, both in regard to the city itself and to its public education system. While these sources may not all cite the specific example of Camden, the themes and trends they identify ring true and feature prominently in the story of Camden throughout this period.

However, this paper will function as a significant divergence from such pre-existing literature, positioning the failure of public education in Camden as a key factor in the city’s decline, rather than a mere byproduct. A common trend in much of the scholarship discussed above is that educational failure is examined not as a contributing root of Camden’s decline (and certainly not an important one, when education is briefly discussed in this context), but rather as a visible, tangible marker of urban decay in the area. While this paper does not deny that failures in education are certainly rooted in fundamental inequity in urban spaces and broader social failings, it seeks to position Camden’s failing educational system not only as a result of urban decline, but as a contributor, specifically by engaging in a discussion of how educational failure transformed narratives around Camden as a failed urban community, beyond help and destined for ruin. In doing so, this paper advances a distinct argument: that Camden’s educational collapse must be understood not merely as evidence of urban decline, but as a foundational force that actively shaped, and in many ways intensified, the narrative of Camden as a city fated for failure.

Prior to launching into an exploration of Camden’s public schooling collapse and the influence of such institutional failures on the city’s reputation and image, it is important to first establish a clear understanding of their context. Because this paper focuses specifically on the institutional failure of Camden’s public schooling system, and how such failures shaped perceptions of the city as an urban lost cause, this section will focus primarily on rising rates of racial segregation in the mid-twentieth century, both within city limits and beyond, particularly in Camden County’s sprawling network of suburban communities. While deindustrialization, economic failure, and governmental neglect certainly contributed to the creation of an urban environment set against educational success, racial segregation was chiefly responsible for the extreme disparities in educational outcomes across the greater Camden region, and is most relevant to this paper’s discussion of the racialized narratives of inevitable urban failure that proved so pervasive on a national scale regarding Camden, both in the mid-to-late twentieth century and into the present day.

Such trends date back to one of the massive demographic transitions of the pre–World War II era: the Great Migration, the mass movement of Black Americans to northern industrial cities. Drawn by the promise of stable employment and the prospect of greater freedom and equality than was available in the Jim Crow South, millions of migrants relocated to urban centers along the Northeastern seaboard. Camden, New Jersey, was among these destinations, attracting a growing Black population throughout the early twentieth century due to its concentration of manufacturing giants such as RCA Victor, the New York Shipbuilding Corporation, and Campbell’s Soup.[7] With the outbreak of war in Europe in 1939—and especially following the United States’ entry into World War II after Pearl Harbor—industrial production in Camden surged. The city soon emerged as a vital hub of wartime manufacturing and domestic production, cementing its status as a key center of American industrial might.

As a direct result of its industrial growth and expanding wartime economy, Camden continued to attract both Black Americans and new migrant populations, many of whom were of Latino descent. Among these groups were large numbers of Stateside Puerto Ricans, continuing a trend of migration dating back to the 1917 extension of U.S. citizenship to Puerto Ricans.[8] Motivated by many of the same factors as Black migrants—chiefly the pursuit of steady employment and improved living conditions—these communities helped shape Camden into a diverse and vibrant urban center. The city’s population of color expanded rapidly during this period, its growth driven by wartime prosperity and the allure of industrial opportunity.

Following American victory in the Pacific and the end of World War II, Camden continued to experience rapid economic growth, although tensions arose among the city’s residents during this period along racial and ethnic lines. With the common enemies of Japan and Nazi Germany firmly removed from the picture, hostilities began to turn inward, and racial tensions skyrocketed, especially at the dawn of the Civil Rights Movement. As historian Chris Rasmussen writes in “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965-1976,” “While Brown and the ensuing civil rights movement pointed toward racial integration, suburbanization forestalled racial equality by creating and reinforcing de facto segregation. As many whites moved to the suburbs, blacks and Latinos remained concentrated in New Jersey’s cities.”[9] Thus, as Black Americans increasingly emerged victorious in the fight against racial injustice and accumulated more rights and legal protections, city-dwelling white Americans grew increasingly fearful and resentful, spurring a mass exodus from urban population centers, including Camden. Drawn by federally backed mortgages, the expansion of highways, and racially exclusive housing policies,[10] white residents moved to neighboring suburbs such as Cherry Hill, Haddonfield, and Pennsauken, while structural barriers effectively excluded Black and Latino residents from the same opportunities. Leaving in droves, white residents took significant wealth and capital, as well as major businesses, with them, weakening the city’s financial base and leaving workers, particularly people of color, vulnerable to unemployment.[11]

Public and private institutions increasingly withdrew resources from neighborhoods perceived as declining or racially changing, and banks engaged in redlining, denying mortgages and loans to residents of nonwhite neighborhoods, while government budgets prioritized the needs of more affluent suburban constituencies over struggling urban areas.[12] Businesses and developers often chose to invest in the suburban communities where white families were relocating, rather than in Camden itself, creating a feedback loop of declining property values, eroding tax revenue, and worsening public services. As historian Robert Beauregard writes in Voices of Decline: The Postwar Fate of U.S. Cities, “…while white middle-class and young working-class households had resettled in suburban areas, elderly and minority and other low-income households remained in the central cities. This increased the demand for basic public services (e.g. education) while leaving city governments with taxpayers having lower earnings and less property to tax.”[13] Thus, Camden residents left behind within the confines of the city became increasingly dependent on social welfare programs, which local and state governments funded less and less. This combination of economic retrenchment, racialized perceptions of neighborhood “desirability,” and policy-driven neglect fueled a cycle of disinvestment that disproportionately affected communities of color, leaving the city structurally disadvantaged.[14]

Concerns about racial integration in neighborhoods and schools also motivated many families to leave, as they sought communities aligned with their social and economic preferences. Such demographic change was rapid, and by 1950 approximately 23.8 percent of Camden City’s population was nonwhite.[15] While that figure may not seem extreme to the modern American, likely familiar with diverse communities and perspectives, it is particularly striking when placed in the context of Camden’s surrounding suburbs: by 1950, the nonwhite population was a mere 4.5 percent in Pennsauken, 2.1 percent in Haddonfield, and an even lower 1.9 percent in Cherry Hill.[16] These figures also reflect the cyclical nature of segregation in New Jersey’s educational sector, contextualizing twentieth-century segregation not as a unique occurrence but as a continuation of historical patterns. In the nineteenth century, the majority of the state’s schools were segregated along racial lines, and in 1863, New Jersey’s state government directly sanctioned the segregation of public school districts statewide. While such decisions would ultimately be reversed in 1881, active opposition to integration remained into the twentieth century, particularly within elementary and middle school education. For example, a 1954 study found that New Jersey schools, both historically and actively, “…had more in common with states below than above…” the Mason-Dixon line. Most notably, however, by 1940 the state had more segregated schools than at any period prior to the passing of explicit anti-segregation legislation in 1881.[17] Thus, it is evident that the state of Camden’s schools in the mid-twentieth century was not an isolated incident, but rather indicative of the cyclical nature of racial separation and disenfranchisement in education throughout the state of New Jersey.

These demographic and economic shifts had profound implications for Camden’s schools, which now served largely Black and Latino student populations. The work of notable civil rights lawyer Albert P. Blaustein proves particularly valuable in demonstrating the catastrophic impacts of white flight on Camden’s schools, as well as the irreversible harm inflicted on students of color as a result of institutional failures in education. Writing in a 1963 report to the Civil Rights Commission under then-President John F. Kennedy, a cautious supporter of the Civil Rights Movement, Blaustein establishes a clear portrait of the declining state of Camden’s public schooling system, as well as the everyday issues facing students and educators alike in the classroom. Delivering a scathing account of neighborhood segregation within Camden, as demonstrated by demographic data on the race and ethnicity of students enrolled in public education across the Camden metropolitan area, Blaustein writes:

Northeast of Cooper River is the area known as East Camden, an area with a very small Negro population. For the river has served as a barrier against intracity population…Two of the four junior high schools are located here: Davis, which is 4.0 percent Negro and Veterans Memorial which is 0.2 percent Negro. Also located in East Camden are six elementary schools, four of which are all-white and the other two of which have Negro percentages of 1.3 percent and 19.7 percent…Central Camden, on the other hand, is largely Negro. Thus, the high percentage of Negroes in Powell (100.0 percent), Sumner (99.8 percent), Fetters (91.6 percent), Liberty (91.2 percent), and Whittier (99.1 percent), etc.[18]

Based on the data provided here by Blaustein, it is simply impossible to argue that racial segregation did not occur in Camden. Additionally, it becomes quite clear that while much discussion of Camden’s public schools and the city’s wider demographic changes focuses on the movement of white residents to suburban areas, racial segregation and stratification absolutely did occur within the city itself, further worsening educational opportunities and learning outcomes for Camden’s students of color.

However, Blaustein does not end his discussion with segregation amongst student bodies, but extends his research to a close examination of the racial and ethnic composition of school staff and leadership, including teachers, administrators, and school board members, yielding similar results. For example, according to his work, the Fetters School, with a student body that was 91.6 percent Black, employed nine white teachers and nine Black teachers in 1960, but two white teachers and sixteen Black teachers in 1963. Even more shockingly, Central School, composed of 72.9 percent Black students, employed only white teachers in 1955. By 1963, just eight years later, this composition had completely reversed and the school employed all Black educators.[19] Thus, Blaustein’s investigation of variances in Camden public schools’ racial composition reveals that this issue was not limited to student enrollment or exclusionary zoning practices, but was rather an insidious demographic trend which had infested all areas of life in Camden, both within education and outside of classrooms. In ensuring that Black students were taught only by Black teachers and white students by white teachers, education in Camden was profoundly nondiverse, eliminating opportunities for cross-racial understanding or exposure to alternative perspectives, thereby keeping Black and white communities completely separate not just in residence and education, but also in interaction and socialization.

With the existence of racial segregation both within Camden and in the city’s surrounding area clearly established, we can now move to an exploration of inequalities in public education within the city. Perhaps the most visible marker of such inequality can be found in school facilities and buildings. The physical conditions in which children of color were schooled were grossly outdated, especially in comparison to the facilities provided to white children, both inside and outside the city of Camden. For example, as of 1963, six public schools had been cited as in dire need of replacement or renovation by Camden’s local legislative board, the majority of them located in segregated communities: Liberty School (1856, 91.2% Black student population), Cooper School (1874, 30.7% Black student population), Fetters School (1875, 91.6% Black student population), Central School (1877, 72.9% Black student population), Read School (1887, 32.0% Black student population), and finally, Bergen School (1891, 45.6% Black student population).[20] Of the schools cited above, half of the buildings deemed by the city of Camden as unfit for use and nonconducive to education were occupied by majority-Black student populations (Liberty, Fetters, and Central), while Bergen School was split just short of evenly between Black and white low-income students.

Additionally, it is important to acknowledge that these figures only account for the absolute worst of Camden’s schools; such trends of inadequate buildings and facilities occurred throughout the city, in line with the general quality of infrastructure and housing present in each neighborhood where schools were located. In other words, while the data above references only a small sample of Camden’s schools, the trends reflected here (specifically, the intentional zoning of Black students into old, run-down facilities) serve as a microcosm of the city’s public schools as a whole.

Education researcher Jonathan Kozol expands on the condition of school facilities in Camden’s disenfranchised communities in his widely influential book, Savage Inequalities. Written in 1991, Kozol’s work serves as a continuation of Blaustein’s discussion of the failing infrastructure of public education in Camden, providing an updated portrait of the classrooms serving the city’s poorest communities. Kozol pulls no punches in a truly visceral recollection of his visit to Pyne Point Middle School, writing:

…inside, in battered, broken-down, crowded rooms, teem the youth of Camden, with dysfunctional fire alarms, outmoded books and equipment, no sports supplies, demoralized teachers, and the everpresent worry that a child is going to enter the school building armed.[21]

Ultimately, it is inarguable that the physical quality of public schools and educational facilities in Camden was deeply unequal, reflecting broader residential trends. Just as residents of poor, minority-majority neighborhoods saw their property values degraded and lived in dilapidated areas of the city as a direct result of redlining and other racist housing policies, so too were children of color in Camden zoned into old, crumbling school buildings that by this time barely remained standing, effectively stripping them of the educational resources and physical comforts provided to white students both in the city and its neighboring suburbs.

Such inequalities were also present in records of student achievement and morale. Educated in barely-standing school buildings overseen by cash-strapped school districts, students of color in Camden’s poor communities were not afforded nearly the same learning opportunities or educational resources as white students in the area. In Camden and Environs, Blaustein cites Camden superintendent Dr. Anthony R. Catrambone’s perspective on inequalities in education, writing, “…pupils from Sumner Elementary School (99.8 percent Negro) who transfer to Bonsall Elementary School (50.3 percent Negro) ‘feel unwanted, and that they are having educational problems not experienced by the Negroes who have all their elementary training at Bonsall’ [Catrambone’s words].”[22]

Thus, it is evident not only that inequalities in schooling facilities and instruction resulted in a considerable achievement gap between students in segregated and integrated communities, but also that such inequalities were clear and demonstrable even to students themselves at the elementary level. Catrambone’s observation that students from Sumner felt “unwanted” and viewed themselves as struggling suggests that students in Camden’s segregated neighborhoods internalized the city’s structural inequality, viewing themselves as lesser than their white and integrated peers in both intellectual capacity and personal character. Such perspectives, reinforced by the constant presence of systemic discrimination along racial lines as well as crumbling school facilities and housing, became deeply entrenched in the minds and hearts of Camden’s youth, creating cyclical trends of educational failure reinforced both externally, by social structures and institutions, and internally, within segregated communities of color.

Similarly, dysfunction soon became synonymous with segregated schools and low-income communities of color at the institutional level. School administrators and Boards of Education began to expect failure of students of color, stripping away any opportunity for such schools to prove otherwise. Camden’s school leadership, for instance, often assigned rigorous curricula and college-preparatory courses to majority-white schools, neglecting to extend the same opportunities to minority-majority districts. Reporting on administrative conversations about the potential integration of Camden High School in 1963, Blaustein observes:

The maintenance of comprehensive academic tracks was recognized by the administration as dependent on white students, implying that students of color alone were not expected to sustain them: ‘if these pupils [white college preparatory students from the Cramer area] were transferred to Woodrow Wilson [a majority-Black high school located in the Stockton neighborhood], Camden High would be almost entirely a school for business instruction and training in industrial arts.’[23]

It is vital to first clarify Blaustein’s usage of the terms “business instruction” and “industrial arts,” by which he refers primarily to what is now called “vocational education” in modern-day America. With this context established, it becomes evident that public educators in early-1960s Camden viewed college education as a racially-exclusive opportunity, to be extended only to white students.

Such attitudes were reflected in the curricular rigor of Camden’s minority-majority schools, which were, to say the least, held to an extremely low standard. The lessons designed for children of color were strikingly simplistic, as schools were treated less as institutions of learning and self-improvement than as detention centers for the city’s disenfranchised youth. As Camden native and historian David Bain writes in the piece “Camden Bound,” “History surrounds the children of Camden, but they do not learn a lot of it in school…Whitman is not read by students in the basic skills curriculum. Few students that I met in Camden High, indeed, had ever heard of him.”[24] As such, Black and Hispanic students were effectively set up for failure as compared to white students, viewed as predestined either to drop out of their primary schooling or to enter lower-paying careers and vocational fields rather than pursue higher education and the opportunities that college afforded, particularly during this period, when college degrees were significantly rarer and more highly valued than in the modern day.

Thus, it is evident that throughout the mid-twentieth century, Camden’s public school system routinely failed Black and Hispanic students. Through inequalities in school facilities and curriculum alike, Camden’s public schools repeatedly communicated to students in segregated areas that they simply were not worth the time and resources afforded to white students, and that they did not possess the same intellectual capacity as suburban children. Denied quality schools and viewed as predestined high school dropouts, Camden’s children were never truly invested in, creating an atmosphere of perpetual administrative negligence toward improving schools and learning outcomes for the city’s disadvantaged youth. As Blaustein so aptly writes, “‘…the school authorities are against changing the status quo. They want to avoid headaches. They act only when pressures are applied.’”[25]

It is clear that such drastic disparities in learning outcomes arose not only out of administrative negligence, but also as a direct result of segregation within the city. While New Jersey never passed a law affirming segregation, schools in Camden were completely and unequivocally segregated, and a clear hierarchy existed in determining which schools and student populations would be supported and prepared for success. Time and time again, educators favored white students and white schools while kicking students of color and their schooling communities to the curb. It is against this backdrop of negligence and resignation that wider narratives of the city of Camden and its youth as “lost causes” beyond any and all help began to emerge.

By the late twentieth century (specifically the 1980s and 1990s), narratives of Camden as a drug- and crime-infested urban wasteland began to propagate, rising to a national scale in the wake of increasing gang activity and rapidly rising crime rates in the area. While public focus centered on the city’s criminal justice department and woefully inept political system, reporting on the state of Camden’s public schools served to reinforce perceptions of the city as destined for failure and beyond saving, chiefly through the local press’s demonization of Camden’s youth. For example, the Courier Post article “Battle being waged to keep youths from crime” reads, “‘Girls are being raped in schools, drugs are proliferating, alcohol is proliferating, and instead of dealing with it, some parents and administrators are in denial…they insist it’s not happening in their backyard.’”[26] The manner in which this author speaks of public schooling in Camden reads as though the city’s schools were not learning communities but prisons, and the students inhabiting them not children but prisoners, destined to be nothing more than “thugs.”

Ignoring the city’s long history of racial segregation and redlining, which, as established earlier in this paper, not only produced disparities in learning outcomes but also caused a deep internalization of institutional failure within many students of color and their learning communities, articles such as this decline to explore the roots of crime and poverty in Camden, focusing instead on the results of decades of institutional neglect of communities of color rather than on the causes. In doing so, media coverage of Camden’s failures removed the burden of responsibility from the city lawmakers and school administrators responsible for abject poverty and educational disparities, instead placing the onus on the very communities that had been intentionally and perpetually disenfranchised across all levels of Camden’s sociopolitical structure.

Additionally, this article’s veiled characterization of Camden parents as disinterested and uninvested in their children’s success is especially inaccurate and offensive. In fact, parents and local communities within even the most impoverished and crime-ridden neighborhoods of Camden had long lobbied for improvements to public schooling and their communities, concerned chiefly with their children’s futures and opportunities. For example, by the late 1990s, Camden’s charter network had experienced significant growth, much of its early success owed directly to parents and grassroots organizations devoted to improving the post-schooling opportunities of disadvantaged children. In 1997, more than seventeen new charters were approved in Camden, the first opening in September of that year. The LEAP Academy University Charter School was the result of years of political lobbying and relentless advocacy, the loudest voices in which came from parents and community activist groups. Spearheaded by Rutgers University-Camden professor and city native Gloria Bonilla-Santiago, the LEAP Academy included dedicated parent action committees and community outreach boards and sponsored numerous community service events.[27] Lumping one of the only groups truly invested in the success of Camden’s children of color together with the institutions that repeatedly conspired to confine them to crumbling schools and prepare them only for low-paying occupations is thus wildly inaccurate and offensive in its historical context. It demonstrates how media narratives around Camden and its school system repeatedly disregarded factual reporting in favor of sensationalized accounts of Camden’s struggles, framing schools and city youth as ground zero for, and progenitors of, the wider issues facing the city as a whole.

While community activism was absolutely present across Camden, it is also important to highlight the damaging impact of such negative narratives on the city’s residents. In “Camden Bound,” a literary exploration of the history of Camden and its community, Camden-born historian David Bain highlights the internalization of these damaging, sensationalized descriptions of the city. He writes:

For most of my life, my birthplace, the city of Camden, has been a point of irony, worth a wince and often hasty explanation that though I was born in Camden, we didn’t actually ever live in Camden, but in a succession of pleasant South Jersey suburban towns…As I moved through life…I would write out the name Camden (I’m ashamed to name my shame now) with a shudder.[28]

While “Camden Bound” does relate specifically to Bain’s individual experience and struggle with acknowledging his birthplace in the wake of national infamy, he spends perhaps even more time exploring the current state of the city, as well as the perspectives of current Camden residents. In recounting his most recent visit to Camden, Bain describes nothing short of absolute devastation, social blight, and urban decay, writing:

Too many newspaper headlines crowd my brain – “Camden Hopes for Release From Its Pain”; “In Struggles of the City, Children Are Casualties”; “Camden Forces Its Suburbs To Ask, What If a City Dies?”; “A Once Vital, Cohesive Community is Slowly, but Not Inevitably, Dying.” And that devastating question from Time: “Who Could Live Here?”…It has been called the poorest city in New Jersey, and some have wondered if it is the poorest in the nation. Adult men and women stand or sit in front of their shabby two-story brick houses, stunned by purposelessness. In abandoned buildings, drug dealers and their customers congregate. On littered sidewalks, children negotiate through broken glass, condoms, and spent hypodermics.[29]

Judging from Bain’s description of the sights he witnessed while driving through Camden, it is evident that the city’s residents had been worn down by the widely circulating narratives of the city and its national infamy. With the vast majority of residents poverty-stricken and lacking the financial or social capital to create meaningful change for their communities themselves, such headlines and narratives were nothing short of devastating. These soul-crushing portrayals signaled yet another instance of perpetual negligence and resignation by powerful voices in the media, local politics, and even the national government, demonstrating a national perception of Camden as “failed,” a perception that was in turn internalized by Camden’s residents.

For example, in interviewing Rene Huggins, a community activist and director of the Camden Cultural Center, Bain chiefly relays her frustration with state legislation passed after Republican governor Christine Todd Whitman assumed office, which rolled back welfare programs, occupational training, and educational funding that had been promised to the city. Speaking on the increasing hopelessness of many city residents, Huggins states, “And on top of all that…we get that headline in Time magazine – ’Who Could Live Here?’ Why not just give us a lot of shovels and bury the place?’”.[30] Such statements, alongside Bain’s observations of Camden, demonstrate the consequences of national resignation to the state of the city: a lack of willingness or initiative to improve it and, even more damaging, the removal of resources and social initiatives designed specifically to do so. Spurned and failed by powerful sociopolitical institutions and organizations across generations, many Camden residents adopted a similar mentality of resignation and shame toward their community, choosing simply to exist with the city’s misery rather than pursue real, meaningful change, thereby reinforcing the harmful narratives that had played such a crucial role in producing these attitudes in the first place.

The very article mentioned in ire by Rene Huggins, Kevin Fedarko’s “Who Could Live Here?”, also offers insight into public perceptions of Camden, and more specifically its youth, during the late twentieth century. Written in 1992, Fedarko’s piece portrays the city of Camden as a barren wasteland and its inhabitants – predominantly young people and children – as akin to nothing more than prisoners and criminals. For example, Fedarko writes:

The story of Camden is the story of boys who blind stray dogs after school, who come to Sunday Mass looking for cookies because they are hungry, who arm themselves with guns, knives and — this winter’s fad at $400 each — hand grenades. It is the story of girls who dream of becoming hairdressers but wind up as whores, who get pregnant at 14 only to bury their infants.[31]

Fedarko’s description of Camden’s children is extraordinarily problematic in that it not only treats the city’s youth as a monolithic group, but then proceeds to demonize them en masse. In describing the city’s young people as baselessly sadistic and violent, while neglecting to situate rising youth crime rates in the context of historical disenfranchisement or to acknowledge that this is not the case for all of the city’s young people, Fedarko’s work only furthers narratives of Camden’s youth as lawless and destined for jail cells rather than degrees. In particular, Fedarko’s description of Camden’s young women as “whores” is especially egregious, considering that the people of whom he speaks are children; he applies gratuitous, derogatory labels to young women (largely women of color) while failing to acknowledge the true tragedy of Camden and the conditions to which its young people are subjected. In describing the situation of a teenager involved in gang activity, Fedarko employs similarly disrespectful and dehumanizing language, writing:

…drug posses…use children to keep an eye out for vice-squad police and to ferry drugs across town. Says “Minute Mouse,” a 15-year-old dealer: “I love my boys more than my own family.” Little wonder. With a father in jail and a mother who abandoned him, the Mouse survived for a time by eating trash and dog food before turning to the drug business.[32]

Ultimately, it is evident that during the late twentieth century, specifically the eighties and nineties, narratives surrounding Camden portrayed the city as nothing more than an urban wasteland and lost cause, a sad shadow of its history as a sprawling manufacturing juggernaut. More damaging, however, were the narratives surrounding the people of Camden (especially its youth), who became synonymous with violence and criminal activity rather than opportunity or potential. In short, media coverage of Camden was concerned chiefly with the spectacle of an urban space and people in chaos, prioritizing Camden’s failures over the historical tragedy of the city and neglecting to situate the former in the context of de facto segregation and racialized disenfranchisement.

Ultimately, it cannot be denied that perceptions of Camden’s public education system as failing and its youth as morally debased were absolutely essential to the formulation of “lost cause” narratives regarding the city. In the popular imagination, Camden became synonymous with decay and dysfunction—a city transformed from a thriving industrial hub into what national headlines would later call “Murder City, U.S.A.” However, these narratives of inevitability in truth emerged from the city’s long history with racial segregation, economic turmoil, and administrative educational neglect. Camden’s schools were central to this development, acting as both products and producers of inequity, serving as clear symbols of the failures in public policy, which were later recast as moral shortcomings of disenfranchised communities themselves.

As demonstrated throughout this study, the structural roots of Camden’s failures in public education were grounded in segregation, manufactured by the same redlining maps and exclusionary residency policies that confined families of color to the city’s most desolate neighborhoods and that would also determine the boundaries of their children’s schools. White flight and suburban migration drained Camden of its capital and tax base, concentrating resources instead in suburban communities whose existing affluence was only reinforced by federal mortgage programs and social support. Historical inquiry into urban decline and the state of urban communities in the postwar period has long emphasized the importance of understanding urban segregation not as a natural social phenomenon, but rather as an architectural inequity extending into every aspect of civic life and education. Camden’s experience confirms this: segregation functioned not only as a physical division of space but as a moral and ideological one, creating the conditions for policymakers and the media to portray the city’s public schools as evidence of cultural pathology rather than systemic betrayal.

By the late twentieth century, these narratives had become fatalistic. Newspaper headlines depicted Camden’s classrooms as sites of chaos and its youth as violent, transforming real inequities into spectacle. The children who bore the weight of these conditions—students of color educated in crumbling buildings and underfunded programs—were cast as perpetrators of their city’s demise rather than its victims. The label “Murder Capital” distilled these complexities into a single, dehumanizing phrase, erasing the structural roots of decline in favor of a narrative that made Camden’s suffering appear inevitable. In doing so, public discourse not only misrepresented the city’s reality but also justified further disinvestment, as policymakers treated Camden’s collapse as a moral failure rather than a product of policy.

However, despite such immense challenges and the incredibly damaging narratives that had become so deeply entrenched in the American national psyche, Camden and its inhabitants persisted. Refusing to give up on their communities, Camden’s residents, many of whom lacked the influence and capital to create change alone, chose to band together and weather the storm of national infamy. From community activism to political lobbying, Camden’s communities of color demonstrated consistent self-advocacy. Viewing outside aid as perpetually promised yet never provided, they pooled their resources and invested in their own neighborhoods and children, establishing charter networks and advocating for criminal justice reform and community policing efforts.

While change was slow and seemingly unattainable, Camden has experienced a significant resurgence in the past decade. From investment by major corporations and sports organizations (for example, the Philadelphia 76ers’ relocation of their practice facilities and front offices to the Camden Waterfront in 2016) to a revitalization of educational access and the recruitment of teaching professionals by the Camden Education Fund, the city has slowly begun to reverse trends of decay and decline, pushing back against narratives that had deemed its failure inevitable and inescapable. Celebrating its first homicide-free summer this year, Camden’s story is tragic, yet far from over. Rather than adhere to the story of persistent institutional failure and disenfranchisement, Camden’s residents have chosen to take charge of the narrative of their home and communities for themselves, changing it to one of perseverance, determination, and strength. In defiance of decades of segregation, disinvestment, and stigma, Camden stands not as America’s “Murder City,” but as its mirror – a testament to how injustice is built and how, through resilience, effort, and advocacy, it can be torn down.

“The case for charter schools.” Courier Post, March 2, 1997.

Bain, David Haward. “Camden Bound.” Prairie Schooner 72, no. 3 (1998): 104–44. http://www.jstor.org/stable/40637098 

Beauregard, Robert A. Voices of Decline: The Postwar Fate of U.S. Cities. 2nd ed. New York: Routledge, 2003. http://www.123library.org/book_details/?id=112493

Blaustein, Albert P., and United States Commission on Civil Rights. Civil Rights U.S.A.: Public Schools: Cities in the North and West, 1963: Camden and Environs. Washington, DC: United States Commission on Civil Rights, 1964.

Douglas, Davison M. “The Limits of Law in Accomplishing Racial Change: School Segregation in the Pre-Brown North.” UCLA Law Review 44, no. 3 (1997): 677–744.

Fedarko, Kevin. “The Other America.” Time, January 20, 1992. https://content.time.com/time/subscriber/article/0,33009,974708-3,00.html

Gillette, Howard. Camden after the Fall: Decline and Renewal in a Post-Industrial City. Philadelphia: University of Pennsylvania Press, 2005.

Goheen, Peter G., and Arnold R. Hirsch. “Making the Second Ghetto: Race and Housing in Chicago, 1940-1960.” Labour / Le Travail 15 (1985): 234. https://doi.org/10.2307/25140590

Kozol, Jonathan. Savage Inequalities: Children in America’s Schools. New York: Broadway Books, 1991.

Rasmussen, Chris. “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965–1976.” History of Education Quarterly 57, no. 4 (2017): 480–514. https://www.jstor.org/stable/26846389

Rothstein, Richard. The Color of Law: A Forgotten History of How Our Government Segregated America. First edition. New York: Liveright Publishing Corporation, a division of W.W. Norton & Company, 2017.

Sugrue, Thomas J. The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. Princeton, NJ: Princeton University Press, 1996.

Tantillo, Sara. “Battle being waged to keep youths from crime.” Courier Post, June 8, 1998.

Yaffe, Deborah. Other People’s Children: The Battle for Justice and Equality in New Jersey’s Schools. New Brunswick, NJ: Rivergate Books, 2007. https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=225406


[1] Peter G. Goheen and Arnold R. Hirsch, “Making the Second Ghetto: Race and Housing in Chicago, 1940-1960,” Labour / Le Travail 15 (1985): 234.

[2] Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America (New York: Liveright Publishing Corporation, 2017).

[3] Goheen and Hirsch, “Making the Second Ghetto,” 234.

[4] Chris Rasmussen, “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965–1976,” History of Education Quarterly 57, no. 4 (2017): 480–514.

[5] Robert A. Beauregard, Voices of Decline: The Postwar Fate of U.S. Cities, 2nd ed. (New York: Routledge, 2003).

[6] Thomas J. Sugrue, The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit (Princeton, NJ: Princeton University Press, 1996).

[7] Howard Gillette, Camden after the Fall: Decline and Renewal in a Post-Industrial City (Philadelphia: University of Pennsylvania Press, 2005), 12–15.

[8] David Haward Bain, “Camden Bound,” Prairie Schooner 72, no. 3 (1998): 104–44.

[9] Rasmussen, “Creating Segregation in the Era of Integration,” 487.

[10] Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America (New York: Liveright, 2017), 70–75; Gillette, Camden after the Fall, 52–54.

[11] Gillette, Camden after the Fall, 45–50; Bain, “Camden Bound,” 110–12.

[12] Thomas J. Sugrue, The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit (Princeton, NJ: Princeton University Press, 1996), 35–40.

[13] Beauregard, Voices of Decline, 91.

[14] Gillette, Camden after the Fall, 50–55; Bain, “Camden Bound,” 120.

[15] Albert P. Blaustein, Civil Rights U.S.A.: Camden and Environs, report to the U.S. Civil Rights Commission, 1963, 22.

[16] Blaustein, Civil Rights U.S.A., 23–24.

[17] Davison M. Douglas, “The Limits of Law in Accomplishing Racial Change: School Segregation in the Pre-Brown North,” UCLA Law Review 44, no. 3 (1997): 677–744.

[18] Blaustein, Civil Rights U.S.A., 18.

[19] Blaustein, Civil Rights U.S.A., 18.

[20] Blaustein, Civil Rights U.S.A.

[21] Jonathan Kozol, Savage Inequalities: Children in America’s Schools (New York: Broadway Books, 1991).

[22] Blaustein, Civil Rights U.S.A., 22.

[23] Blaustein, Civil Rights U.S.A.

[24] Bain, “Camden Bound,” 120–21.

[25] Blaustein, Civil Rights U.S.A.

[26] “Battle being waged to keep youths from crime,” Courier Post, June 8, 1998.

[27] Sarah Tantillo, “The case for charter schools,” Courier Post, March 2, 1997.

[28] Bain, “Camden Bound,” 108–9.

[29] Bain, “Camden Bound,” 111.

[30] Bain, “Camden Bound,” 119.

[31] Kevin Fedarko, “The Other America,” Time, January 20, 1992.

[32] Ibid.

From Right to Privilege: The Hyde Amendment and Abortion Access

When the Supreme Court handed down its decision in Roe v. Wade in 1973, it seemed as though abortion had finally been secured as a constitutional right. However, this ruling came after more than a century of contested abortion law in the United States. Beginning in the late nineteenth century, the American Medical Association led campaigns to criminalize abortion, which pushed midwives and women healers out of reproductive care.[1] Illegal abortion had been widespread and dangerous; even in the early twentieth century, physicians estimated that thousands of abortions were performed annually, many of them resulting in septic infections and hospitalizations.[2] Long before Roe, access to reproductive care was already shaped by race and class, as Rickie Solinger shows in her study of how unwed pregnancy was treated differently for white women and women of color.[3] Within a few years of the Roe v. Wade decision, however, the promise of abortion access was strategically narrowed. In 1976, Congress passed the Hyde Amendment, which banned the use of federal Medicaid funds for most abortions. This did not overturn Roe v. Wade, but it quietly transformed abortion from a legal right into an economic privilege, one that poor women could rarely afford to exercise. As Susan Gunty bluntly stated, “The Hyde Amendment did not eliminate abortion; it eliminated abortion for poor women.”[4] The Hyde Amendment redefined abortion rights by turning a constitutional guarantee into a privilege dependent on income. It represented a shift in strategy among anti-abortion advocates: instead of directly challenging Roe, they targeted public funding.[5] Representative Henry Hyde himself admitted that his goal was total abortion prohibition, but that the only vehicle available was the Medicaid bill.

Historian Maris Vinovskis emphasizes that this marked a turning point at which anti-abortion lawmakers learned to restrict access not by banning abortion, but by eliminating the means to obtain it. They used the appropriations process to accomplish “what could not be achieved through constitutional amendment.”[6] By embedding abortion restrictions into routine spending bills, lawmakers created a powerful way to undermine Roe without technically violating it. An immediate effect was the creation of a two-tiered system of reproductive rights. Wealthier women could continue to obtain abortions, while lower-income women, such as those on Medicaid, were forced to carry their pregnancies to term. The Supreme Court validated this in Maher v. Roe in 1977 and in Harris v. McRae in 1980, maintaining that while the Constitution guaranteed the right to abortion, it did not require the government to make that right financially accessible to all. As the Court stated, the government “need not remove [obstacles] not of its own creation.”[7] This logic fit neatly into the rise of the New Right. The fetus was being recast as a protected moral subject, transformed, as Sara Dubow describes it, into “a symbol of national innocence and moral purity.”[8] At the same time, historian Linda Gordon notes that public funding has never been neutral, and has always reflected judgments about which women should bear children and which should not.[9] In this way, Hyde did not invent reproductive inequality, but it sharpened it considerably.

This raises the question of how the Hyde Amendment reshaped abortion access in the United States between 1976 and 1999, and why it disproportionately affected poor women and women of color. This paper argues that the Hyde Amendment transformed abortion from a constitutional right into an economic privilege. By restricting Medicaid funding, the amendment created a two-tier system of reproductive access in which poor women and women of color were effectively denied the ability to exercise a legal right.

Historians who study reproduction agree that abortion in the United States has always been shaped by race, class, and power. Linda Gordon shows that reproductive control has never been distributed equally: wealthier white women have long had greater access to contraception and abortion, while poor women and women of color faced barriers or state interference.[10] Johanna Schoen adds to this by examining how public health and welfare systems sometimes pushed sterilization or withheld care, showing that the state has often intervened most heavily in the reproductive lives of marginalized women.[11] Together, these historians argue that Hyde fits into a much older pattern of the government regulating the fertility of the women who had the least political power.

Another group of historians focuses on law, policy, and the political meaning of abortion in the late twentieth century. Michele Goodwin analyzes how legal frameworks that claimed to protect fetal life often limited women’s autonomy, especially that of poor women.[12] Maris Vinovskis explains how anti-abortion lawmakers learned to use the appropriations process to restrict abortion access without challenging Roe directly.[13] Meanwhile, Sara Dubow traces how the fetus became a powerful cultural symbol, which helped conservatives rally support for funding restrictions like Hyde.[14] These scholars help explain how Hyde gained legitimacy both legally and culturally, and why it became such a durable policy.

A third set of historians looks at activism, feminism, and the reshaping of abortion politics in the 1970s and 1980s. Rosalind Petchesky shows how abortion became central to the rise of the New Right, as antifeminist and religious groups used the issue to organize a broader conservative movement.[15] Loretta Ross and other reproductive justice scholars explain how women of color challenged the narrow “choice” framework of the mainstream pro-choice movement, arguing that legality meant little without the resources needed to make real decisions.[16] Their work highlights that Hyde did not only restrict abortion for poor women, but also pushed activists to rethink what reproductive rights should even look like.

Taken together, these historians reveal three major themes: long-standing inequality in reproductive politics, legal tools reinforcing those inequalities, and the political shifts that made Hyde a defining part of conservative identity. What is still less explored, and where this paper enters the conversation, is how the Hyde Amendment created a two-tier system of abortion access between 1976 and 1999, and how that funding gap turned a constitutional right into an economic privilege. This paper brings these threads together to show how policy, law, and inequality reshaped the meaning of abortion rights in the United States.

The Hyde Amendment did not appear out of nowhere; rather, it developed in a very particular political moment in which abortion had become one of the most emotionally charged issues in American politics.[17] After Roe v. Wade legalized abortion nationwide in 1973, opponents of abortion had to reconsider their strategy.[18] They could no longer rely on state criminal bans, since these were now unconstitutional. Therefore, instead of attempting to outlaw abortion directly, they began to look for indirect ways to limit who could actually get one. The question became not whether abortion was legal, but whether it was accessible.[19] This shift happened at the same time that the country was experiencing a wave of distrust toward the federal government after Watergate, along with concerns about inflation and federal spending.[20] Additionally, as a movement over family values escalated, many came to see the federal government as infringing on families’ privacy and rights. These anxieties made it easier to frame abortion as both a moral issue and a financial one. Historian Maris Vinovskis notes that the Hyde Amendment represented a new strategy, shifting away from trying to overturn Roe and toward “an effort to restrict the practical ability to obtain abortions through funding limitations.”[21] Anti-abortion lawmakers realized that they could still limit abortions by cutting off the financial aid that allowed poor women to get them.[22]

To understand this shift, it is important to recognize that the abortion debate had already intensified in the years leading up to Roe. During the late 1960s and early 1970s, Catholic organizations such as the National Right to Life Committee had begun mobilizing against abortion reform in states like New York and California.[23] At the same time, Medicaid, created in 1965 as part of the War on Poverty, became central to debates about welfare spending and the moral regulation of poor women.[24] Because Medicaid disproportionately served low-income women and women of color, it became an early battleground for questions about who deserved state-funded healthcare and reproductive autonomy.

Representative Henry Hyde was the central figure behind this effort, and he did not try to hide his intentions. During debate in Congress, he stated, “I certainly would prevent, if I could legally, anybody having an abortion; unfortunately, the only vehicle available is the Medicaid bill.”[25] He made it clear that the amendment was not about government budgeting or fiscal responsibility, but about restricting abortion access by targeting low-income women who depended on Medicaid.[26] This strategy also lined up well with emerging political alliances: fiscal conservatives who opposed federal spending could support Hyde because it reduced a publicly funded service.[27] At the same time, religious conservatives who morally opposed abortion also supported it. The idea of “taxpayer conscience,” the notion that people should not have to financially support something they disagree with, became an effective talking point.[28] However, this strategy also drew on a much longer history of the government controlling the reproductive lives of women, especially poor women and women of color. Nellie Gray, the March for Life national director, explained in a 1977 news article that “pro-life organizations will only have one chance at a human rights amendment and they must do it right by seeing to it that abortion is not permitted in the United States.”[29] Gray’s warning reflected how strongly anti-abortion leaders viewed Hyde as a stepping stone toward a much larger project of restricting abortion nationwide. Her statement also highlighted the growing belief among conservative activists that federal funds could be used to reshape reproductive policy, in ways that would disproportionately affect the same women who had already been consistently targeted.

Throughout the twentieth century, the state encouraged childbirth among white, middle-class women while discouraging it among women considered “undesirable,” which often meant poor women, Black women, and Native American women, among others.[30] In this sense, the Hyde Amendment fit into an existing pattern of allowing privileged women to maintain reproductive autonomy while placing the greatest burden on those already facing economic and racial inequality. The structure of the amendment also built inequality directly into access. Since Hyde was attached to the federal appropriations bill for health and welfare, it had to be renewed every year.[31] This meant that each year, Congress debated which exceptions should be allowed, and whether Medicaid would cover abortion in cases of rape, incest, or a threat to the mother’s life.[32] These exceptions were often extremely limited, difficult to qualify for, or inconsistent across states.[33] In practice, they rarely resulted in meaningful access.

The impact of Hyde was immediate and severe. There was an enormous drop in Medicaid-funded abortions, and while states were technically allowed to use their own funds to pay for abortion services, most did not.[34] As a result, abortion access quickly became dependent not only on personal income, but also on geography. A woman’s ability to exercise a supposedly constitutional right now depended on which state she lived in and whether she had the financial means to pay out of pocket.

By the late 1970s, the Hyde Amendment had created a two-tiered system of reproductive access. Abortion was still legal, but the ability to obtain one became tied to class and race.[35] For many women on Medicaid, especially Black and Latina women, who were already disproportionately represented among low-income populations, the right to choose existed only in theory.[36] What Hyde actually accomplished was a shift from abortion as a universal, constitutional right to abortion as something one had to be able to afford. In this way, Hyde did not just restrict funding; it redefined what rights meant in the United States. It showed that a right could remain legally intact, yet still be functionally unreachable for some.

After the Hyde Amendment was passed in 1976, it quickly faced legal challenges from abortion rights advocates who argued that cutting off Medicaid funding violated the constitutional protections that Roe v. Wade had put in place.[37] Their basic argument was that if the government recognized the right to choose abortion, then it should not be allowed to create conditions that made that right impossible to exercise.[38] In other words, they argued that a right without access is not really a right at all. However, when these cases reached the Supreme Court, the Court ultimately sided with the federal government, confirming that the state could acknowledge a right while refusing to make it materially available. The first major decision was Maher v. Roe in 1977. This case involved a Connecticut rule that denied Medicaid funding for abortions even as the state continued to cover costs associated with childbirth.[39] The plaintiffs argued that this policy violated the Equal Protection Clause by treating poor women differently from those who could pay privately.[40] However, the Supreme Court rejected this argument, stating in the majority opinion that “the Constitution does not confer an entitlement to such funds as may be necessary to realize the full advantage of the constitutional freedom.”[41] This reveals the Court’s broader stance, as the justices separated the idea of a right from the state’s obligation to make that right actually meaningful. By framing funding as an “entitlement,” the Court implied that financial accessibility was a luxury, not a constitutional requirement. This language helped transform abortion from a guaranteed right into a conditional one, dependent on a woman’s financial status.

This reasoning set the stage for a more consequential case, Harris v. McRae, which in 1980 dealt specifically with the constitutionality of the Hyde Amendment.[42] The plaintiffs again argued that denying Medicaid funding effectively denied the right to abortion to poor women. They also argued that Hyde violated the Establishment Clause because it reflected religious beliefs, particularly those of the Catholic Church.[43] However, the Court upheld the amendment, and Justice Potter Stewart wrote for the majority that although the government “may not place obstacles in the path of a woman seeking an abortion, it need not remove those not of its own creation.”[44] This distinction allowed the Court to reinterpret poverty not as a structural condition shaped by state policy, but as an individual misfortune outside of constitutional concern. Faye Wattleton, president of the Planned Parenthood Federation, challenged the Court’s findings, stating that “the court has reaffirmed that all women have a constitutionally protected right to an abortion, but has denied poor women the means by which to exercise that right.”[45] Scholars like Michele Goodwin have also argued that this logic effectively weaponized economic inequality by making it a neutral, legally permissible barrier to reproductive autonomy.[46] The Court drew a clear line between legal rights and material access, claiming that the Constitution protects the first and not the second.

The distinction between rights and access became one of the most influential and damaging ideas in later abortion policy. The Court’s logic suggested that if poverty prevented a woman from obtaining an abortion, that was simply her personal situation and not something the government was responsible for addressing.[47] For poor women, though, this effectively meant that the right to abortion was conditional on wealth. Justice Thurgood Marshall pointed this out directly in his dissent, arguing that the decision reduced the right to choose to “a right in name only for women who cannot afford to exercise it.”[48] Marshall understood that legal recognition was meaningless when economic barriers stood in the way. Historians and legal scholars have also pointed out that these rulings reflected broader anxieties about welfare and poor women’s reproductive autonomy. Johanna Schoen notes that after Hyde, “the issue was no longer legality but economic access. The ability to choose became a measure of one’s class position.”[49] The Court’s decisions essentially cast poverty as a private problem, not a systemic barrier. By accepting the argument that the state did not have to fund abortions, the Court allowed economic inequality to become a legal tool for shaping reproductive outcomes.

The Harris decision also intensified racial disparities in reproductive healthcare; since women of color were disproportionately represented among Medicaid recipients, they experienced the most direct consequences of the amendment. Linda Gordon argues that policies like Hyde fit into a longer pattern in which the state has “regulated fertility more tightly among poor women and women of color.”[50] Hyde did not simply limit abortion funding; it also reinforced existing racial and economic hierarchies within reproductive control. The immediate impact of these decisions can clearly be seen in the data. In states that fully implemented the Hyde restrictions, Medicaid-funded abortions dropped by more than ninety-nine percent, essentially disappearing within the first year.[51] Clinics that had relied on Medicaid reimbursement closed, and in many communities the nearest clinic became hours away.[52] For low-income women, the costs of travel, time off from work, and childcare created new layers of burden on top of the medical expense itself.[53]

Once the Supreme Court upheld the Hyde Amendment in Harris v. McRae, abortion access in the United States became uneven and heavily dependent on geography and income. Even though Roe v. Wade technically still guaranteed the constitutional right to abortion, the Hyde Amendment meant that states could decide whether they would use their own funds to support abortion services for Medicaid recipients. This resulted in what many scholars describe as a patchwork system of reproductive access, in which a woman’s ability to exercise her rights depended on her ZIP code and her bank account instead of a universal legal standard.[54] Since Black, Latina, and Native women were disproportionately represented among low-income Medicaid recipients, the restrictions clearly had a racial impact, even if the policy did not mention race outright.

This pattern was not new. As Johanna Schoen writes, “the state has historically encouraged childbirth among white, middle class women while discouraging it among poor women and women of color.”[55] Hyde simply reshaped that older system into a modern one, using funding instead of forced sterilization or criminal statutes. Public funding decisions have always reflected judgments about who should reproduce and who should not, or in other words, which lives were valued and which were not.[56] Meanwhile, the procedures themselves became more expensive and more difficult to access. Without Medicaid coverage, many women had to delay their abortions while they gathered money to pay for the procedure. This led to abortions being performed at later gestational stages, which made them more medically complicated and more costly. As Schoen explains, delays caused by funding restrictions increased both physical risk and emotional strain for patients.[57] Clinics in poorer regions, especially in the South and Midwest, struggled to stay open without Medicaid reimbursement, leaving many areas without any providers at all.[58] The burden of traveling long distances and putting their lives on hold fell much harder on lower-income women than on wealthier ones. The cost of abortion became a structural burden, one created by the conditions of poverty. For many women, these obstacles made abortion inaccessible, even if they technically had the legal right to obtain one.

By upholding Hyde, the Supreme Court effectively established this two-tiered system, confirming that constitutional rights did not guarantee the means to exercise them. Reproductive autonomy was made dependent on individual financial circumstances and state-level political culture. The legal battles following Hyde made it clear that the fight over abortion would increasingly be decided by who could afford it.

By the 1980s, the Hyde Amendment had become more than a funding restriction; it had become a symbol. What began in 1976 as a policy decision buried in the federal budget grew into one of the defining features of the conservative movement. Hyde showed how questions about family, morality, and religion could be folded into debates about government spending, linking fiscal and moral conservatism.[59]

Before the late 1970s, abortion had not been clearly split along party lines; there were liberal Republicans who supported Roe v. Wade and conservative Democrats who opposed abortion. But this political landscape changed dramatically as the New Right emerged. Evangelical leaders like Jerry Falwell and Paul Weyrich mobilized conservative Christians around issues such as school desegregation, the Equal Rights Amendment, and sex education.[60] Abortion became the unifying issue they needed: a morally charged topic that could bind fiscal conservatives, religious traditionalists, and states’ rights advocates. The political backlash against Roe occurred at the same time that evangelical Christians were becoming more politically organized.[61] Hyde provided a concrete policy issue around which these groups could mobilize, and helped them forge a new partisan identity. Debates that began over funding became part of a larger cultural conflict about the meaning of family, sexuality, and, arguably, national values. The rhetoric surrounding the Hyde Amendment reflected this shift: instead of discussing abortion primarily in terms of women’s autonomy or health, conservatives increasingly framed the debate around the fetus. Sara Dubow argues that by the 1980s, the fetus had come to symbolize “a national innocence and moral purity,” a life seen as separate from the woman and deserving of state protection.[62] This transformation was crucial because it allowed abortion opponents to present themselves as protecting vulnerable life rather than restricting women’s rights.

President Ronald Reagan played a major role in pushing this narrative. Although he had signed an abortion reform law as governor of California, by the time he won the presidency in 1980 he had fully embraced the anti-abortion cause. In his 1983 essay “Abortion and the Conscience of the Nation,” he argued, “We cannot diminish the value of one category of human life, the unborn, without diminishing the value of all human life.”[63] With this statement, Reagan tied abortion to a broader moral crisis, suggesting that the nation’s character and spirituality were at stake. This argument resonated strongly with the many evangelicals who had helped usher him into office, as he frequently spoke of the United States as a nation in need of moral renewal. His rhetoric helped solidify abortion as a moral anchor of conservative identity, and made support for Hyde a test for Republican lawmakers.[64] In this environment, opposing the Hyde Amendment became politically risky, as it could be interpreted as rejecting the moral vision that Reagan had tied so closely to national identity.

Meanwhile, the Hyde Amendment’s budget framing allowed conservatives to present the issue in the language of limited government rather than as explicit moral regulation. The idea that taxpayers should not be forced to support abortion with public funds gained traction among people who might not have outright embraced the anti-abortion movement. As Maris Vinovskis explains, Hyde represented a new style of policymaking in which moral goals were pursued through fiscal restrictions rather than constitutional bans.[65] It was a quieter and more durable form of regulation.

Blending moral politics and fiscal conservatism also helped fuel the broader culture wars of the 1980s and 90s. Issues like school prayer, sex education, gay rights, and welfare reform became linked together as matters of defending “traditional values.”[66] The Hyde Amendment fit neatly into this framework, allowing conservatives to argue that they were simultaneously protecting unborn life and protecting taxpayers from government overreach.[67] They saw abortion as both a moral failure and a misuse of public funds. However, this shift also made it increasingly difficult for Democrats to maintain a unified position on abortion. While most Democratic lawmakers supported the legal right to abortion, many were hesitant to outright oppose the Hyde Amendment, avoiding the risk of being labeled anti-religion.[68] As a result, the amendment was repeatedly renewed with bipartisan support. A 1993 newspaper article, reflecting on the twenty years since Roe, noted that Hyde displayed a “masterful understanding of the rules, procedures, and time constraints of the House,” as he “rounded up 254 of his colleagues (including 98 Democrats) to sustain [his amendment] and prohibit federal funding to pay for abortions for poor women.”[69] The article showed that the amendment’s durability rested not only on conservatives but on a bipartisan reluctance to challenge a measure framed as fiscally responsible and morally protective.

By the 1990s, the logic behind Hyde had become ingrained in national political identity. The idea that abortion was something the government should not fund became widely accepted, masking the fact that Hyde had made abortion a class-dependent right, available to those who could afford it and inaccessible to those who could not.[70] The amendment played a key role in shaping the culture wars by turning the reproductive choices of women into questions of morality and national identity instead of questions of justice and autonomy.

The widening inequalities created by the Hyde Amendment did more than restrict access; they exposed the limits of the existing pro-choice framework and set the stage for a new kind of activism. The measures taken by states may have seemed procedural, but combined with the lack of funding, they created a maze of barriers for low-income women. Before the inequalities created by Hyde pushed activism in new directions, the reproductive rights movement of the 1970s was dominated by second-wave feminist organizations such as NOW and NARAL.[71] These groups framed abortion primarily through the language of privacy and individual choice, relying heavily on Roe’s constitutional logic.[72] Yet this framework was limited. It often centered middle-class white women and assumed that once legal barriers were removed, access would naturally follow.[73] Poor women, women of color, and immigrant women repeatedly testified that legality meant little without affordable care, transportation, or childcare.[74] Their experiences highlighted structural inequalities that mainstream pro-choice rhetoric did not address. By the late 1980s and 1990s, many reproductive rights organizations began referring to the United States as having two systems of abortion access. In wealthier states, where Medicaid or state funds covered abortion, access remained relatively stable; in other states, it had become severely limited. The concept of “choice,” which had been the foundation of pro-choice activism, no longer fit the reality. Abortion had shifted from a universal constitutional right to a right that had to be purchased. The Hyde Amendment redrew the map of reproductive freedom, determining where and to whom abortion was available.

While the Hyde Amendment strengthened the conservative movement and reshaped how abortion was discussed in national politics, it also pushed reproductive rights activism in a new direction. In the 1970s, many mainstream feminist organizations had framed abortion mainly as a matter of individual choice, drawing directly from the privacy language of Roe v. Wade.[75] The assumption was that if abortion was legal, women would be able to access it. But Hyde made it clear that legality and access were not the same thing, and that the concept of “choice” was far less meaningful for women who could not afford the procedure in the first place. At first, mainstream pro-choice organizations struggled to respond. Groups like the National Association for the Repeal of Abortion Laws (now known as Reproductive Freedom for All) and NOW (the National Organization for Women) continued to fight Hyde through legislative appeals and court challenges, focusing on restoring Medicaid coverage.[76] However, these strategies were slow and had little success. Contemporary reports show how quickly grassroots feminist activism responded to Hyde. A 1979 Delaware County Daily Times article described more than forty NOW members and NARAL activists picketing a congressional dinner attended by Henry Hyde.[77] Protesters carried signs reading “Poor people don’t have a choice about my body,” and NOW’s Delaware County president Debbie Rubin told reporters that the Hyde Amendment “eliminates all abortions for poor women except when the life of the mother is in danger.”[78] She warned that measures like Hyde did not stop abortion but instead “force a return to back-alley and self-inflicted abortions.”[79] Meanwhile, women who were directly affected by Hyde were left to find practical ways to access the care they needed. This led to the early development of abortion funds, community-based efforts in which volunteers raised money to help low-income women pay for their abortions.[80] These funds showed that access could be supported by mutual aid and grassroots networks.

The deeper and more transformative opposition to Hyde came from activists who were already organizing around healthcare inequality, racism, and economic justice. They emphasized that the same systems that restricted abortion access also failed to provide basic healthcare, childcare, housing, and social support.[81] For many women of color, the issue was not only the right to end a pregnancy, but also the right to raise children safely and with dignity.[82] This perspective was rooted in a longer history, as poor women and women of color had often faced contradictory and coercive forms of reproductive control, from being denied contraception and abortion to being subjected to coerced sterilization.[83] The Hyde Amendment did not create this dynamic, though it did extend it into the post-Roe era by making abortion services unattainable for those without financial resources. Linda Gordon notes that decisions about public funding have long reflected judgments about which women should bear children and which should not, and Hyde reinforced exactly this kind of hierarchy.[84]

By the early 1990s, these critiques began to merge into a new framework known as Reproductive Justice. The term was coined in 1994 by a group of Black women activists who argued that the mainstream pro-choice movement focused too narrowly on the legal right to abortion, ignoring the economic and social barriers that shaped many women’s reproductive decisions.[85] They insisted that reproductive freedom was not only about ending a pregnancy, but also about having the conditions necessary to make and sustain meaningful choices in the first place.[86] Reproductive autonomy clearly required more than legal permission to have an abortion; access to healthcare, living wages, and safe housing were only a few of the resources it depended on.[87] Organizations like SisterSong, founded in 1997, helped establish reproductive justice as a national movement.[88] The coalition brought together Black, Latina, Indigenous, and Asian American women to argue that reproductive rights should be understood as human rights, grounded in equality rather than privacy alone.[89] Their work highlighted that access to abortion, childcare, healthcare, and racial and economic justice were all deeply connected. The activism that emerged in response to the Hyde Amendment did not simply resist the policy; it reframed the entire conversation about reproductive rights and freedoms. “Choice” was an incomplete framework, usually centered on the experiences of white middle-class women and overlooking the realities of those with fewer resources.[90]

Nearly fifty years after its passage, the Hyde Amendment continues to shape reproductive access in the United States. It did not overturn Roe v. Wade, and it did not need to. By restricting Medicaid funding, Hyde redefined abortion as something that had to be purchased personally, even though it had been framed as a constitutional right. It set a precedent for how lawmakers could limit rights indirectly, through economic policy rather than outright prohibition. The Supreme Court’s decisions in Maher v. Roe and Harris v. McRae reinforced this shift by drawing a line between the right to choose and women’s ability to exercise that right. The Court insisted that poverty was a private circumstance, not something the state was obligated to remedy. This stance made economic inequality seem legally neutral, even as its burden fell hardest on poor women and women of color.

The result was a stratified system in which abortion remained legal but unevenly available. Access varied dramatically by state, income level, and race, and the disparities only grew over time as clinics closed and new restrictions were passed. Lawmakers began to justify restrictions as a defense of life rather than a limitation on women. Meanwhile, the activism that emerged from groups like the National Black Women’s Health Project and SisterSong reframed abortion access as part of a broader struggle for reproductive justice, insisting that reproductive freedom meant not only the right to end a pregnancy, but also the right to raise children in safe and secure environments. This exposed what Hyde had been demonstrating all along: rights are only meaningful when people have the resources to act on them.

On the one hand, the Hyde Amendment demonstrated how effectively lawmakers can use economic constraints to reshape constitutional rights without touching their legality. This strategy persisted for decades, influencing battles over contraception access, parental consent laws, and clinic closures. On the other hand, Hyde also helped produce a more expansive movement for reproductive freedom, one that recognized the limits of legal victories without material support. The lesson of Hyde is that a right that cannot be accessed is not truly a right. The law might claim neutrality in withholding federal funds, but the consequences of that “neutrality” are deeply unequal. The Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization in 2022 completed what Hyde set in motion. By allowing states to ban abortion outright, Dobbs transformed the unequal access created by Hyde into legal prohibition. The patterns of racial, geographic, and economic inequality exposed by Hyde now define the post-Dobbs landscape, showing that the struggle for reproductive freedom has always been connected to the struggle for equality.

Understanding the Hyde Amendment can also help social studies teachers think about how to teach topics like constitutional rights, inequality, and the ways legal decisions affect people’s everyday lives. For high school students, it can be difficult to understand how a right can exist on paper but still be unreachable in practice. The Hyde Amendment offers a clear example. Looking at cases like Maher v. Roe and Harris v. McRae helps students see how the Supreme Court can acknowledge a constitutional right while also allowing policies that make that right inaccessible to certain groups. This gives teachers a concrete way to help students think about the difference between what the law says and how people actually experience it, which is an important part of civic learning.

This topic is also useful for teaching about political realignment and the culture wars of the late twentieth century. Abortion was not always a purely partisan issue, and Hyde helps show students how moral, religious, and economic arguments came together to reshape politics on a national level. When teachers use primary sources like congressional testimonies, protest coverage, and presidential speeches, students can trace how different groups framed abortion and funding restrictions, and how these debates shaped the identity of the New Right. This not only builds students’ analytical skills but also shows them how public policy becomes a cultural symbol, not just a legal decision. Hyde also creates an opportunity to introduce the concept of reproductive justice, especially when teaching about movements led by women of color. Many high school students have never considered how race, class, and geography influence who can actually exercise their rights. Discussing how organizations like the National Black Women’s Health Project and later SisterSong responded to Hyde helps students see how activism grows in response to inequality. Teachers never need to take a political stance to guide students through these conversations. Instead, they can highlight how different communities understood the consequences of Hyde and why some activists argued that “choice” alone was not enough.

All in all, the Hyde Amendment is a strong example for teaching disciplinary literacy in social studies. It encourages students to read court cases closely, compare historical interpretations, analyze political speeches, and connect policy decisions to real human outcomes. Using Hyde in the classroom shows students that history is not just about memorizing events, but can also be about understanding how power operates and how policies can reshape people’s lives.

Cofiell, Trisha. “Women Protest at Hyde Dinner.” Delaware County Daily Times (Chester, PA), September 14, 1979. NewspaperArchive. https://newspaperarchive.com/delaware-county-daily-times-sep-14-1979-p-1/.

Daley, Steve. “Hyde Remains Constant.” Franklin News-Herald (Franklin, PA), July 14, 1993. NewspaperArchive. https://newspaperarchive.com/franklin-news-herald-jul-14-1993-p-4/.

Dubow, Sara. Ourselves Unborn: A History of the Fetus in Modern America. Oxford: Oxford University Press, 2011. https://research.ebsco.com/linkprocessor/plink?id=a4babef6-641b-3719-a368-8aa5e93e8575.

Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

Gordon, Linda. The Moral Property of Women: A History of Birth Control Politics in America. 3rd ed. Urbana: University of Illinois Press, 2002. https://research.ebsco.com/linkprocessor/plink?id=ea0e3984-56df-3fca-adb6-3fc070515698.

Gunty, Susan. “The Hyde Amendment and Medicaid Abortions.” The Forum (Section of Insurance, Negligence and Compensation Law, American Bar Association) 16, no. 4 (1981): 825–40. http://www.jstor.org/stable/25762558.

Harris v. McRae, 448 U.S. 297 (1980). https://supreme.justia.com/cases/federal/us/448/297/.

Maher v. Roe, 432 U.S. 464 (1977). https://supreme.justia.com/cases/federal/us/432/464/.

Neurauter, Juliann R. “Pro-lifers Favor Hyde Amendment.” News Journal (Chicago, IL), December 7, 1977. NewspaperArchive. https://newspaperarchive.com/news-journal-dec-07-1977-p-19/.

Olson, Courtney. “Finding a Right to Abortion Coverage: The PPACA, Intersectionality, and Positive Rights.” Seattle University Law Review 41 (2018): 655–690.

Perry, Rachel. “Abortion Ruling to Hit Hard Locally.” Eureka Times-Standard (Eureka, CA), August 27, 1980. NewspaperArchive. https://newspaperarchive.com/eureka-times-standard-aug-27-1980-p-9/.

Petchesky, Rosalind Pollack. “Antiabortion, Antifeminism, and the Rise of the New Right.” Feminist Studies 7, no. 2 (1981): 206–46. https://doi.org/10.2307/3177522.

Reagan, Leslie J. “‘About to Meet Her Maker’: Women, Doctors, Dying Declarations, and the State’s Investigation of Abortion, Chicago, 1867-1940.” The Journal of American History 77, no. 4 (1991): 1240–64. https://doi.org/10.2307/2078261.

Reagan, Ronald. “Abortion and the Conscience of the Nation.” The Catholic Lawyer 30, no. 2 (1986). https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=2212&context=tcl.

Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

Schoen, Johanna. Choice and Coercion: Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

Solinger, Rickie. “‘Wake up Little Susie’: Single Pregnancy and Race in the ‘Pre-Roe v. Wade’ Era.” NWSA Journal 2, no. 4 (1990): 682–83. http://www.jstor.org/stable/4316090.

United States. Congress. House. Committee on Appropriations. Federal Funding of Abortions, 1977–1979. Washington, D.C.: U.S. Government Printing Office, 1979. Gerald R. Ford Presidential Library. https://www.fordlibrarymuseum.gov/library/document/0048/004800738repro.pdf.

Vinovskis, Maris A. “The Politics of Abortion in the House of Representatives in 1976.” Michigan Law Review 77, no. 7 (1979): 1790–1827. https://doi.org/10.2307/1288043.


[1] Reagan, Leslie J. “‘About to Meet Her Maker’: Women, Doctors, Dying Declarations, and the State’s Investigation of Abortion, Chicago, 1867-1940.” The Journal of American History 77, no. 4 (1991): 1240–64. https://doi.org/10.2307/2078261.

[2] Reagan 1245

[3] Solinger, Rickie. “‘Wake up Little Susie’: Single Pregnancy and Race in the ‘Pre-Roe v. Wade’ Era.” NWSA Journal 2, no. 4 (1990): 682–83. http://www.jstor.org/stable/4316090.

[4] Gunty, Susan. “The Hyde Amendment and Medicaid Abortions.” The Forum (Section of Insurance, Negligence and Compensation Law, American Bar Association) 16, no. 4 (1981): 825. http://www.jstor.org/stable/25762558.

[5] Vinovskis, Maris A. “The Politics of Abortion in the House of Representatives in 1976.” Michigan Law Review 77, no. 7 (1979). https://doi.org/10.2307/1288043.

[6] Vinovskis 1801

[7] Harris v. McRae, 448 U.S. 297 (1980) https://supreme.justia.com/cases/federal/us/448/297/

[8] Dubow, Sara. Ourselves Unborn: A History of the Fetus in Modern America. Oxford: Oxford University Press, 2011. https://research.ebsco.com/linkprocessor/plink?id=a4babef6-641b-3719-a368-8aa5e93e8575.

[9] Gordon, Linda. The Moral Property of Women: A History of Birth Control Politics in America. 3rd ed. Urbana: University of Illinois Press, 2002. https://research.ebsco.com/linkprocessor/plink?id=ea0e3984-56df-3fca-adb6-3fc070515698.

[10] Gordon 29-34

[11] Schoen, Johanna. Choice and Coercion: Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

[12] Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

[13] Vinovskis 1793-1796

[14] Dubow 147-155

[15] Petchesky, Rosalind Pollack. “Antiabortion, Antifeminism, and the Rise of the New Right.” Feminist Studies 7, no. 2 (1981): 206–46. https://doi.org/10.2307/3177522.

[16] Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

[17] Vinovskis 1818

[18] Vinovskis 1794

[19] Gunty 837

[20] Vinovskis 1812

[21] Vinovskis 1801

[22] Gunty 835

[23] Petchesky 120

[24] Schoen, Johanna. Choice and Coercion: Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

[25] Olson, Courtney. “Finding a Right to Abortion Coverage: The PPACA, Intersectionality, and Positive Rights.” Seattle University Law Review 41 (2018): 655.

[26] Gunty 831

[27] Vinovskis 1811

[28] United States. Congress. House. Committee on Appropriations. Federal Funding of Abortions, 1977–1979. Washington, D.C.: U.S. Government Printing Office, 1979. Gerald R. Ford Presidential Library. https://www.fordlibrarymuseum.gov/library/document/0048/004800738repro.pdf

[29] Neurauter, Juliann R. “Pro-lifers Favor Hyde Amendment.” News Journal (Chicago, IL), December 7, 1977. NewspaperArchive.

[30] Schoen 3-11

[31] Vinovskis 1793

[32] Gunty 826

[33] Gunty 825

[34] Gunty 825

[35] Schoen 5

[36] Schoen 5

[37] Gunty 834

[38] Gunty 836

[39] Maher v. Roe, 432 U.S. 464 (1977). https://supreme.justia.com/cases/federal/us/432/464/

[40] Maher v. Roe

[41] Maher v. Roe

[42] Harris v. McRae

[43] Harris v. McRae

[44] Harris v. McRae

[45] Perry, Rachel. “Abortion Ruling to Hit Hard Locally.” Eureka Times-Standard (Eureka, CA), August 27, 1980. NewspaperArchive.

[46] Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

[47] Gunty 834

[48] Harris v. McRae

[49] Schoen 147

[50] Gordon 340

[51] Gunty 828

[52] Schoen 225

[53] Schoen 140

[54] Schoen 24

[55] Schoen 5

[56] Schoen 5

[57] Schoen 149

[58] Schoen 32

[59] Vinovskis 1818

[60] Petchesky 216-17

[61] Dubow 162

[62] Dubow 7

[63] Reagan, Ronald. “Abortion and the Conscience of the Nation.” The Catholic Lawyer 30, no. 2 (1986). https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=2212&context=tcl.

[64] Dubow 154

[65] Vinovskis 1801

[66] Dubow 165

[67] Vinovskis 1795

[68] Vinovskis 1809

[69] Daley, Steve. “Hyde Remains Constant.” Franklin News-Herald (Franklin, PA), July 14, 1993. NewspaperArchive. https://newspaperarchive.com/franklin-news-herald-jul-14-1993-p-4/.

[70] Schoen 5

[71] Gordon 311-323

[72] Gordon 316

[73] Schoen 52

[74] Gunty 827-829

[75] Schoen 74

[76] Dubow 159

[77] Trisha Cofiell, “Women Protest at Hyde Dinner,” Delaware County Daily Times (Chester, PA), September 14, 1979, 1, Newspapers.com.

[78] Cofiell 1

[79] Cofiell 1

[80] Schoen 11

[81] Goodwin 818

[82] Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

[83] Schoen 6

[84] Gordon 339

[85] Ross 14-15

[86] Goodwin 785

[87] Ross 14-16

[88] Ross 17

[89] Goodwin 857

[90] Gordon 339

Beyond the Box Score

October 15, 1923. John McGraw’s New York Giants versus Miller Huggins’ New York Yankees in game six of the World Series. Yankee Stadium, The House That Ruth Built, had opened to the public that April at the beginning of the Yankees’ season. Babe Ruth opened the stadium and set the tone for that season, hitting three home runs along with eight walks. That tone held up until the day at the Polo Grounds in Upper Manhattan when McGraw’s dream of three straight championships was crushed, allowing the New York Yankees to win their very first World Series championship.

The Yankees winning the World Series was the very first story on the front page of that day’s New York Times, which describes a tense game six with many back-and-forth moments between the Giants and the Yankees. Each team had at least one key player with a large impact on the game: for the Yankees, Babe Ruth, of course, and for the Giants, their pitcher Art Nehf. Nehf, whom the author of the article calls “the last hope of the old guard,”[1] had allowed only two hits in the first seven frames and given up just one home run, to Ruth. Nehf had been too powerful for the Yankee hitters; with his great speed and side-breaking curve, the Yankees went hitless from the third inning to the eighth. Down three runs and getting no love from the crowd in the Giants’ home stadium, the situation was looking grim for Huggins and his team.

When the eighth inning arrived, things still seemed to be going well for Nehf, but on the second pitch of the inning the tide started to turn. The ball flew close to Walter Schang’s ear; he tried to move and ended up hitting the ball toward third base for a single. After this hit, two more Yankee players reached base and brought Schang home, putting the Yankees only two behind the Giants. According to the article, “Nehf’s face turned as white as a sheet.”[2] Something happened to him after the few hits he gave away, and he could not continue. Bill Ryan, the backup pitcher, came in to try to salvage what he could from the wreckage Nehf left behind. Ryan started fairly well and almost made it out of the inning, until Bob Meusel hit the ball slightly to the right of Ryan into center field. Three runs scored on that hit, five for the inning, making the World Series all but over at that point.

With that eighth-inning rally, the Yankees were able to put the game away and win their very first World Series Championship. 

The next big article on the front page of this New York Times issue reports that hungry mobs raided Berlin bakeries. At this time in 1923, five years after losing World War One, Germany was facing severe consequences imposed by the Allied powers. One of these consequences concerned reparations: Germany was falling behind on its war payments to the French, who decided to occupy the Ruhr district in response. This area was known for its raw materials, which the French would take for themselves as payment toward German war debts. In these New York Times articles, it is fascinating to see the differences in rioting between German-controlled cities like Berlin and Frankfurt and French-occupied ones such as Neustadt and Düsseldorf. The first half of the article covers Berlin and Frankfurt, two cities still controlled by the German government but wracked by the inflated price of bread. This was because the government had decided to print more money to cover its war debts. The problem with printing more money is that it creates more physical currency but decreases its value. After the government did this, the German mark became nearly worthless and the price of bread skyrocketed. The article reports that “5000 demonstrators, mostly unemployed men, reinforced by women with market baskets… marching to the Rathaus and making demands upon the authorities… The police reserves were called and drove demonstrators away.”[3] Inflation wrecked the economy so badly that German people were unable to provide for their families, and they protested in the capital city to show the disarray of the German state.

The second half of the article covers Neustadt and Düsseldorf, two cities in the territory France occupied under the stipulations of the Treaty of Versailles. In Neustadt, crowds of unemployed people attempted to raid a post office that was reported to be holding currency. French authorities were sent out to break up the crowd. In Düsseldorf, communists and nationalists were working together to foment trouble in the Ruhr district. The author notes one key difference between the riots in this city and those in Berlin: “According to a statement made this morning the movement is political rather than economic. It was aimed against Chancellor Stresemann (German foreign minister) on the one hand and against the French on the other.”[4] These people were not rioting because they lacked food; they hated the fact that they were being ruled by a foreign power. I found this section of the newspaper very interesting because, knowing what happened later with Hitler’s rise to power, the German people despised the Treaty of Versailles and were willing to shift to political extremes to get rid of it.

Another section of this issue comments on the worsening conditions in Germany during the 1920s. This article is from the perspective of Reed Smoot, Chairman of the Senate Finance Committee, who called at the White House to tell the president about the conclusions he had reached after his recent trip to Europe.

After his trip, the senator had some definite opinions on revisiting the Hughes proposal, an American plan under which an international commission would determine Germany’s ability to pay its reparations from the war. The idea was that the commission would fix the amount of money the Germans would have to pay back. Smoot wanted all the countries on the commission to agree to this plan and expected the French to back down on their reparation demands. To be fair, most of World War I was fought on French territory, and the northern regions of the country needed these reparations for rebuilding.

The senator knew that France would most likely not agree to this arrangement, and he was frightened about the future of Europe. He told the president, “Unless something was done quickly, there was danger of an outbreak which might involve all of Europe.”[5] Unfortunately, Smoot was right, and nothing was done about the issue. It is the very reason Hitler was able to become Chancellor a decade later: the Allies did not relax reparations and kept making demands of a destroyed Germany.

The next big headline of this New York Times issue brings the news back to the United States. It concerns a conference of drys calling on President Calvin Coolidge to take drastic action against people who were breaking Prohibition. The council of drys, people who were against liquor consumption, saw how many people were smuggling illegal booze by sea and wanted it stopped. They wanted the president and the American people to uphold the Eighteenth Amendment.

Smuggling liquor by sea was one of many workarounds citizens found to get around Prohibition in the 1920s. Rum Row was the name of an offshore liquor market along the East Coast, just beyond the American maritime limit, where transactions of alcohol were made. Bootleggers, people who engaged in the illegal sale of alcohol, would simply sail out to this region in a small boat to pick up shipments of liquor to resell back in the States. The last small section of this article consists of direct quotes from the president calling for legislators to abide by the laws and punish people who were breaking the laws of the Constitution.

He says that any official unwilling to uphold “the State or Federal Constitution should resign his office and give place to one who will neither violate his oath nor betray the confidence of the people.”[6] Some corrupt politicians were becoming bootleggers themselves or were not punishing people who broke the law, which is why the president had to make this statement to legislators. Coolidge ends his statement by saying, “Lawmakers should not be lawbreakers.”[7]

Another section farther into the newspaper is from the perspective of another official who traveled abroad and reported back on the country he visited. In this case it is Representative Fred A. Britten of Illinois, returning from a visit to Russia that changed his mind on recognition of the Soviet government. Much like Reed Smoot, Britten called upon the president to deliver his report after spending some time in the new Soviet Union.

Unsurprisingly, the representative began his report to the president by saying, “The Soviet regime was a visionary Government whose very foundation is based on murder, anarchy, Bolshevism and theft.”[8] Knowing that this article was written only three years after the first Red Scare in the United States, one can imagine his thoughts on the regime in Russia. Around the early 1920s, many states in the US were outlawing the advocacy of violence as a means of securing social change, and most people suspected of being communist or left-wing were jailed. It is also worth mentioning that this first Red Scare did not distinguish between Communism, Socialism, Social Democracy, or anarchism; all were deemed a threat to the nation.

Britten mentions that he “traveled unofficially, sought no favors, and tried to see the good side of that tremendous political theory which is now holding 150,000,000 people in subjection.”[9] It is debatable whether he truly tried to see the good side of Russia. He also discusses the major difference in how religion was treated in Russia. Atheism was what was primarily taught in the Soviet Union, because religion was seen as a bourgeois institution whose only goal was to make money off its followers. Britten mentions some signs he saw, one by the entrance to the Kremlin Palace that read, “Religion is the opium of the State,”[10] and another that said, “Religion is the tool of the rich to oppress the poor.”[11] Communism stood in stark contrast to capitalism, which is why the United States went through two Red Scares to protect itself from an ideology so different from its own.

The prompt for this paper was to find a significant baseball box score of our choosing from the 1900s. I selected the Yankees’ first-ever World Series win over the New York Giants, using the Historical New York Times Database. We were then instructed to examine the other articles published in that same newspaper issue. For example, I focused on reports of hunger riots in Berlin, which were driven by the collapse of the German mark and soaring bread prices after World War I. This was the first major assignment of the class, designed to help us begin developing primary source research and analysis skills, an essential foundation for any history course.

Teachers don’t have to limit this to a baseball history lesson; it can easily be adapted to focus on any major topic in U.S. history from the 1900s and beyond. Students can begin with a key event as the entry point for their primary source research. Then, they can expand their analysis by identifying and writing about other events covered in the same newspaper issue, painting a fuller picture of what was happening in the U.S. during the chosen time period. This strategy not only sharpens students’ analytical skills but also broadens their understanding of how historical events overlap and influence one another, helping them grasp the interconnectedness of social, political, and cultural developments within a given era.

“Britten Opposes Soviet Recognition.” New York Times (1923-), Oct 16, 1923: Page 5. https://login.tcnj.idm.oclc.org/login?url=https://www.proquest.com/historical-newspapers/yanks-win-title-6-4-victory-ends-1-063-815-series/docview/103153313/se-2.

“Conference of Drys Calls on Coolidge For Drastic Action.” New York Times (1923-), Oct 16, 1923: Page 1.

“Hungry Mobs Raid Berlin Bakeries.” New York Times (1923-), Oct 16, 1923: Page 1.

Oversimplified. “Prohibition – OverSimplified.” YouTube video, December 15, 2020.

“Smoot and Burton See Peril In Europe.” New York Times (1923-), Oct 16, 1923: Page 3.

“Yanks Win Title; 6-4 Victory Ends $1,063,815 Series.” New York Times (1923-), Oct 16, 1923:  Page 1.


[1] “Yanks Win Title; 6-4 Victory Ends $1,063,815 Series,” New York Times (1923): 1.

[2] “Yanks Win Title,” 1.

[3] “Hungry Mobs Raid Berlin Bakeries,” New York Times (1923): 1.

[4] “Hungry Mobs Raid,” 1.

[5] “Smoot and Burton See Peril In Europe,” New York Times (1923): 3.

[6] “Conference of Drys Calls on Coolidge For Drastic Action,” New York Times (1923): 1.

[7] “Conference of Drys,” 1.

[8] “Britten Opposes Soviet Recognition,” New York Times (1923): 5.

[9] “Britten Opposes Soviet,” 5.

[10] “Britten Opposes Soviet,” 5.

[11] “Britten Opposes Soviet,” 5.