The Devastating Effects of the Great Leap Forward

Right after the end of the Second World War, a new issue took center stage that would divide the world in half for the next several decades: the rise and spread of communism. Originating in the Russian Revolution of 1917, communism spread throughout the world as socialist ideologies turned many nations into communist states, either under or at the very least inspired by the Soviet Union. Other countries saw their own revolutions that led to a rebirth of or major change within their government systems, one such example being China becoming a communist nation in 1949. The man who led the people of China into this new era of Chinese history and became their new leader was Mao Zedong. During this time, the Cold War was in full effect, with many countries not only falling to communism but also racing to advance their status in the world. Mao Zedong believed China had the full potential to grow stronger and faster in its economy, resources, and military. Starting in 1958, Mao launched the Great Leap Forward, a movement focused on improving China’s stature as fast as possible in order to catch up with other global powers such as the Soviet Union and the United States. However, Mao’s ambitious methods and his dedication to rapidly increasing production would backfire disastrously. It is not a disputed claim that the Great Leap Forward failed under Mao Zedong’s leadership, but how bad were the repercussions? This paper will discuss the extent of the failures and the cost in human lives caused by the Great Leap Forward.

            The early stages of the Cold War consisted of the biggest, most powerful nations of the time displaying their strength, alliances, power, and influence over the world. On one side of the conflict was the United States, which possessed significant military strength and strong government leadership, and which made it its goal to intervene when necessary to prevent other countries from falling to communism. On the other side was the Soviet Union, which held control over nearly half of Europe (particularly the nations formerly occupied by the Axis powers during World War II) and was spreading its influence throughout several parts of Asia, including China. The leader of the newly founded People’s Republic of China, Mao Zedong, took notice of how rapidly the Soviet Union had caught up with the rest of the world, and recognized that this rapid growth was one of the biggest reasons the U.S.S.R. was seen as a major and powerful threat.

In the article Demographic Consequences of the Great Leap Forward in China’s Provinces, Xizhe Peng explains that Mao’s ambition to replicate what had been done under Stalin’s five-year plans inspired his decision to speed up production throughout the country in order to quickly reach the level of, and even outperform, other countries1. “the late Chairman Mao Zedong proposed the goal for China of overtaking Great Britain in industrial production within 15 years…The general line of the Party that guided the Great Leap Forward was ‘Going all out, aiming high and achieving greater, faster, better, and more economical results in building socialism’” (Peng)1. Beginning in 1958, China aimed to reach levels of production that Mao Zedong saw as great improvements in building strength in resources, such as industrializing faster to catch up on steel production and thereby provide more tools, resources, and military equipment. Nearly all citizens were put to work to contribute to the larger collective effort, and while in theory this may have seemed like a good idea, problems quickly emerged that turned bad situations into catastrophic failures.

            Poor decisions, flawed thinking, and harmful actions by Chairman Mao Zedong heavily damaged his own society and were a fairly direct cause of the deaths of millions of people. In the article Dealing with Responsibility for the Great Leap Famine in the People’s Republic of China, Felix Wemheuer discusses whom or what the Chinese Communist Party blamed for the disastrous famine and deaths the Great Leap Forward caused throughout China, and many felt that Mao Zedong himself was solely responsible.2 For a while, Mao was so stubborn that he refused to accept responsibility for what he had caused, instead seeking to blame other factors. However, due to pressure from his party and the massive devastation now spread throughout China by the failed drive for mass production, Mao Zedong would eventually take some of the blame.

            The rapid growth that the Soviet Union accomplished in a short amount of time was a remarkable feat. The Soviet Union succeeded in becoming an industrial powerhouse by the mid-20th century, an impressive demonstration of how a country could shift its goals and, within a short period, grow in the eyes of the world in strength and power. In a period of world history when many countries were racing to grow their industry, military, and level of dominance, Mao Zedong looked to adopt and expand upon similar strategies so that China could join the arms race and be seen as a powerful contender. Mao was clearly trying to follow in the Soviets’ footsteps by rapidly increasing China’s resources and financial stock, but just as the Russians suffered major setbacks, the people of China would face similar, yet even greater, damage to their economy. The article Causes, Consequences and Impact of the Great Leap Forward in China by Hsiung-Shen Jung and Jui-Lung Chen describes the detrimental damage the Great Leap Forward caused to China’s economy3. “After the Great Leap Forward, it took five years to adjust the national economy before it was restored to the 1957 level… economic losses of up to RMB 120 billion” (Jung and Chen)3. The nation was put under tremendous debt due to the poor planning and even worse results of the Great Leap Forward, and to top it off, Mao’s stubbornness prevented him from taking any responsibility. Mao would even go on to make claims intended to redirect the Chinese people’s frustrations elsewhere.
Jung and Chen further state that “Mao remained reluctant to fully acknowledge the mistakes of the Great Leap Forward… he proposed the Party’s fundamental approach in the socialist stage, followed by a left-wing socialist educational campaign aimed at cracking down on the capitalist roaders,” (Jung and Chen)3. Just as Mao had spread his ideologies and political messages to the people of China, he responded to the hardship of the failed experiment he had caused by shifting the blame onto those whose economic and business philosophies opposed the Chinese Communist Party’s. The dire state of China’s economy, brought on by major losses in food production, labor, and human life, was caused by pushing the country too hard and too fast in Mao’s egotistical drive for China to change and grow quickly, rather than allowing time for proper development and a fair distribution of wealth, food, and supplies to his own citizens.

            The famine caused by the Great Leap Forward ranks among the most infamous famines in history, alongside the notorious Irish potato famine of the 19th century that killed over a million people. The total death toll of the famine in China during the Great Leap Forward was in the tens of millions. As the article Mortality consequences of the 1959-1961 Great Leap Forward famine in China: Debilitation, selection, and mortality crossovers by Shige Song puts it plainly, “Famine is a catastrophic event” (Song)4.

This same article presents a research study in which the author not only compiled mortality rates and statistics from the Chinese famine, but also examined its negative repercussions for survivors and for birth rates afterwards, including a graph showing the probability of survival decreasing4. The declining rate of survival affected not only young children and teens, but also people years after the famine was over. The poor distribution of food supplies and the shrinking number of crops successfully grown made a major dent in the health and lifespan of the average citizen of China, and the famine itself set in rapidly within a short period of time. The Great Leap Forward lasted only a few years, but the severe damage it inflicted on China would cause its people to continue suffering for years to come.

            When considering how to measure the severity of an event or period of time, one may look at the total number of deaths directly linked to the occurrence. While this is certainly a reasonable statistic to use, in the case of a famine where the main cause of death is starvation, it raises the question of how large the difference in food output really was. The article The Great Leap Forward: Anatomy of a Central Planning Disaster by Wei Li and Dennis Tao Yang provides detailed data and statistics on grain output, the number of workers, and other elements of farm production5.

The Great Leap Forward lasted from 1958 to 1962, and Li and Yang’s table of grain output in China shows that total grain output during those years decreased by almost 100 million tons, a loss of almost half of the total grain output from just before the Great Leap Forward5. During this same period, there was a noticeable decrease in workers, presumably because of deaths from the famine and the harsh labor they were put through. However, there was also an increase in both farm machinery and chemical fertilizer, which would grow even more rapidly in the years after the Great Leap Forward. While this can be considered a small victory for Mao’s intent to rapidly modernize China’s agriculture, it came at the major cost of a famine, a decrease in crops being grown, and the loss of many Chinese farmers. The advanced farming tools, machinery, and techniques that did come out of the Great Leap Forward still came at a major cost to the people and economy of China.

            While farming and grain production were a very big part of the overall progression of China’s resources, they were not the only things Mao Zedong was trying to rapidly change in order to make China a more powerful country. For most of its history, China was primarily an agricultural society, but at the turn of the 20th century, many countries were beginning to industrialize their materials, resources, and militaries, and were doing so at a very fast rate. Steel production in China had to be taken much more seriously if China was to catch up with the other world powers in industrial strength, but just as with the negative consequences of rapidly changing grain production, Mao’s attempt to reform steel production came with its own tolls. Returning to Wei Li and Dennis Tao Yang’s article The Great Leap Forward: Anatomy of a Central Planning Disaster, a statistical table on steel production and output in China during this period shows how big a jump there was in steel and iron output within a very short amount of time5. China was able to triple its steel and iron output during the years of the Great Leap Forward, and the number of production units increased from tens of households to over two thousand households in just a few years5. However, during this same span, the number of provinces that granted their people exit rights quickly fell, as more and more provinces took rights away from their own workers. Moreover, in the years after the Great Leap Forward, steel output and the number of production units would decrease by a noticeable amount, showing that this was only a short-term benefit with major consequences5.
This shows how quick, sweeping changes in the production of any resource within a country can come at the expense of that country’s other elements, such as human rights and households’ access to food, materials, and resources.

            The rapid increase in the demand for more food and faster crop growth was not good for the people in the long run, since it would cause a famine and leave millions upon millions of people to starve to death. Starvation is already a major issue for the population of one of the most populous countries in the world, but the Great Leap Forward’s farming strategies did not harm only the Chinese people; the ground itself was severely damaged by the rapid changes and increased activity across China. The article Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61) by Elizabeth Gooch explains how Mao’s farming campaign during the Great Leap Forward not only increased the mortality rate, but also damaged China’s dirt and soil6. Statistics and graphs assembled by Elizabeth Gooch in her article show an increase in the amount of rugged terrain resulting from the vast increase in production, manufacturing, and pollution caused by the Great Leap Forward6. Much of the natural dirt, soil, and nutrients in the farmland used for growing crops, plants, and food was blighted by the overproduction going on throughout China, and there are even parallels between the death rate and the rate at which soil became rugged. Mao Zedong wanted grain production, along with the production of other resources, to keep increasing, but because his plans were executed poorly and produced horrendous results, he caused enormous harm both to the people of China and to China’s natural environment.

The number of crops being harvested was down, the natural land of China was dwindling, and a famine had taken the lives of millions of people, but there remains the question of whether it was all worth it in the long run for the growth and prosperity of China. The main purpose of Mao Zedong’s Great Leap Forward was for China to catch up with the fully developed and powerful countries, and one of the biggest factors in doing so is an efficient, well-running, and strong industrial production system. Ever since the Industrial Revolution, civilizations one by one had moved their economic production forward by building factories that produced metal, steel, and other materials. This was also one of the biggest outcomes of the Soviet Union’s rapid growth in power in the early 20th century, and it was the strong industrial powerhouse Joseph Stalin achieved for his country that Mao Zedong wanted to replicate in China. Returning to Elizabeth Gooch’s Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61), the growth of industrialization within China was perhaps one of the biggest accomplishments of the Great Leap Forward6. As the line graphs in Gooch’s article show, industry increased by a very large amount during the years of the Great Leap Forward, although agriculture took a slight decrease during the same time frame, most likely because many farmers were forced to work in the newly built factories and steel-producing areas6. However, looking at the rates of birth, growth, and death during these same years, it becomes clear that the success of rapid Chinese industrialization came at the expense of the people themselves. The birth and growth rates decreased sharply during this time, and the death rate tremendously increased6.
While China did benefit greatly from the growth of industry and metal production, it came at the cost of the health and safety of the people, along with attention being shifted away from agriculture and the pollution of the land.

Besides the main elements of the Great Leap Forward that caused major problems for the people of China, such as grain, steel, food, and other resources, there was another element crucial to the survival of people and civilizations: water. The Great Leap Forward also included campaigns for the industrial use and processing of water that in themselves caused even more issues for China. The article The Great Leap Forward (1958-1961) Historical events and causes of one of the biggest tragedies in People’s Republic of China’s history by Adriana Palese describes the effects of the increase in water conservation projects from 25 million to 100 million, the “inhuman working hours”, and the fact that the projects were not a success but instead came at the expense of the people of China, as “most were useless and caused disasters some years after and other projects were simply abandoned and left uncompleted” (Palese)7. While a decrease in flooding is mentioned, this is once again an example of how the many campaigns Mao Zedong launched to advance China through rapid industrialization did not work for the benefit of the Chinese people as a whole, since the vast majority would suffer from this campaign along with the other failed campaigns of the Great Leap Forward.

While rapidly increasing the production of everything in China may seem good in concept, not only did it severely harm the people and society of China, but sometimes these bold campaigns actually made situations worse than they were before. In Adriana Palese’s The Great Leap Forward (1958-1961) Historical events and causes of one of the biggest tragedies in People’s Republic of China’s history, she writes that “there were total shortages of other foods and other products such as cooking oil, sugar, thermos bottles, porcelain dishes, glasses, shoes, etc” (Palese)7. Not only could less food be made due to the dwindling number of crops being grown and an ongoing famine, but manufactured goods, even simple tools and supplies, were facing a major shortage, and it seems the simple transactional market economy of China for everyday goods and products was collapsing. Palese’s article even includes the wide percentage decreases in the output of agricultural and industrial goods during this period7. The Great Leap Forward was rapidly deteriorating all the elements that make up Chinese society: its economy, public morale, and way of life.

During one of the most crucial parts of the Great Leap Forward, Mao Zedong aimed to improve and increase the farming of grain, since it was still seen as a very important part of actually feeding the population. However, a common enemy of crop growth in any farming society is bugs, pests, and other insects, since they can eat away at growing crops. Mao Zedong had his own solution to this problem. In the article China’s deadly science lesson: How an ill-conceived campaign against sparrows contributed to one of the worst famines in history by Jemimah Steinfeld, “As part of the Four Pests campaign – a hygiene campaign against flies, mosquitoes, rats and sparrows – people were called upon to shoot sparrows, destroy their nests and bang pots and pans until the birds died of exhaustion” (Steinfeld)8. Anyone in China, men, women, and children alike, could participate in the killing and removal of these targeted pests. While there were minor victories in removing the pests, the campaign overall came at a serious cost. One of these so-called pests, the sparrow, was removed from China’s agricultural society, but sparrows had been responsible for keeping an even bigger threat to crops away: locusts.8 Even after Mao Zedong stopped the killing of sparrows, the damage had already been done, as this was one of the biggest reasons the famine spread so rapidly through China, causing the deaths of millions of people in just a few short years.8 This shows why, no matter the circumstances or beliefs, the ecosystem of any land should never be drastically altered for human needs, since removing living creatures from their natural habitat and cycle created a direct link between the farming and pest campaigns and the millions of deaths caused by famine.

In conclusion, while the Great Leap Forward was initially seen as a progressive strategy to quickly advance Chinese society, it ultimately resulted in failure. Millions of people died of starvation in the mass famines that swept the vast farmland of China. Many farmers were taken from their fields and forced to work in industrial yards so China could catch up on steel and metal production. Mao Zedong was so blinded by the results of other nations’ rapid industrialization that he ignored the negative consequences it could bring, only this time China would suffer more than perhaps any country before, with little to nothing to show for it. Mao Zedong’s attempt to advance China only set the country back, reduced morale, and reduced support from his own party. The Great Leap Forward will go down as one of the most devastating eras in Chinese history, both for the enormous loss of life and for how one of the oldest and most culture-rich societies in the world nearly destroyed itself over ambitious goals driven by the global rivalries of the Cold War.

Endnotes

  1. Peng, Xizhe. “Demographic Consequences of the Great Leap Forward in China’s Provinces.” The China Quarterly 159 (1999): 430–453.
  2. Wemheuer, Felix. “Dealing with Responsibility for the Great Leap Famine in the People’s Republic of China.” The China Quarterly 216 (2013): 402–423.
  3. Jung, Hsiung-Shen, and Jui-Lung Chen. “Causes, Consequences and Impact of the Great Leap Forward in China.” Asian Culture and History 11, no. 2 (2019): 61–70.
  4. Song, Shige. “Mortality Consequences of the 1959–1961 Great Leap Forward Famine in China: Debilitation, Selection, and Mortality Crossovers.” Social Science & Medicine 71, no. 3 (2010): 551–558.
  5. Li, Wei, and Dennis Tao Yang. “The Great Leap Forward: Anatomy of a Central Planning Disaster.” Journal of Political Economy 113, no. 4 (2005): 840–77.
  6. Gooch, Elizabeth. “Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61).” Journal of Comparative Economics 47, no. 4 (2019): 699–718.
  7. Palese, Adriana. The Great Leap Forward (1958–1961): Historical Events and Causes of One of the Biggest Tragedies in People’s Republic of China’s History. Bachelor’s thesis, Lund University, 2009.
  8. Steinfeld, Jemimah. “China’s Deadly Science Lesson: How an Ill-Conceived Campaign Against Sparrows Contributed to One of the Worst Famines in History.” Index on Censorship 47, no. 3 (September 2018): 6–8.

Bibliography

Jung, Hsiung-Shen, and Jui-Lung Chen. “Causes, Consequences and Impact of the Great Leap Forward in China.” Asian Culture and History 11, no. 2 (2019): 61–70.

Gooch, Elizabeth. “Terrain Ruggedness and Limits of Political Repression: Evidence from China’s Great Leap Forward and Famine (1959–61).” Journal of Comparative Economics 47, no. 4 (2019): 699–718.

Li, Wei, and Dennis Tao Yang. “The Great Leap Forward: Anatomy of a Central Planning Disaster.” Journal of Political Economy 113, no. 4 (2005): 840–77.

Palese, Adriana. The Great Leap Forward (1958–1961): Historical Events and Causes of One of the Biggest Tragedies in People’s Republic of China’s History. Bachelor’s thesis, Lund University, 2009.

Peng, Xizhe. “Demographic Consequences of the Great Leap Forward in China’s Provinces.” The China Quarterly 159 (1999): 430–453.

Song, Shige. “Mortality Consequences of the 1959–1961 Great Leap Forward Famine in China: Debilitation, Selection, and Mortality Crossovers.” Social Science & Medicine 71, no. 3 (2010): 551–558.

Steinfeld, Jemimah. “China’s Deadly Science Lesson: How an Ill-Conceived Campaign Against Sparrows Contributed to One of the Worst Famines in History.” Index on Censorship 47, no. 3 (September 2018): 6–8.

Wemheuer, Felix. “Dealing with Responsibility for the Great Leap Famine in the People’s Republic of China.” The China Quarterly 216 (2013): 402–423.

How Perot’s Economic Populism Nearly Broke the 2-Party System

The 1990s were an impactful time in America, both through pop culture, with the World Wide Web coming into play, TV shows like Friends and Seinfeld, and the rise of grunge music, and through politics. We must not forget that America was shaped politically during the 1990s as well: there were the L.A. Riots and the trial of O.J. Simpson, but arguably most important were Ross Perot and his political campaigns, and how he almost broke the two-party political system that had been in place for over 130 years at the time.

Ross Perot was a complete outsider politician, primarily active in the United States in the 1990s, who ran for president twice, in 1992 and 1996, with no prior experience holding or running for office. “However, the election was to be complicated by a third-party bid from Ross Perot. Despite winning 19 million votes in the 1992 election, the maverick Texan aroused little public enthusiasm this time, but opinion polls nevertheless suggested that he could get more than 10 per cent of the national vote.[1]” Perot ran as an independent candidate in 1992 and under his newly created Reform Party in 1996, receiving roughly 19% of the popular vote in 1992 and 8.5% in 1996. He was the first independent or minor-party candidate in nearly 80 years to win such a high percentage of the vote. “Against most predictions, 19 percent of the vote went to Ross Perot, the best result for a candidate since Teddy Roosevelt.[2]” The 1992 election produced the highest percentage for a third-party candidate since 1912, when Theodore Roosevelt received nearly 27% of the popular vote and won 6 states and 88 electoral votes.

Why, then, did Ross Perot run as a candidate in the first place? Perot advocated a contract with Americans that laid out his main political stances. “The Contract emphasized the Perot balanced issues of a Balanced federal budget, reform, and limiting American commitment to internationalism.[3]” So, with Perot’s basic policies in place and both of his presidential runs ultimately ending in failure, his attempts still came remarkably close to breaking the American two-party system, which had elected only a Democrat or a Republican as president since Zachary Taylor, a member of the Whig Party, won the election of 1848; the Whigs were seen as a proto-Republican party that competed with the Democratic Party before its disbandment, and the last Whig president, Millard Fillmore, succeeded Taylor in 1850. So how exactly was Ross Perot able to come so close to breaking an American political system that had been in place for nearly 150 years? The answer lies in Perot’s outsider brand of economic populism, expressed through his staunch opposition to NAFTA, his virtually self-funded political campaign, and his businessman persona.

These factors affected the United States political system so deeply because they nearly turned America, which had operated as a two-party system for roughly the previous 130 years, into a three-party or even multiparty system, a sharp departure for Americans who feel that our modern-day two-party system is flawed and uncompromising. Had this taken place, America would have had a significantly different style of government and economics in American society.

Perot’s Background and Policies

Ross Perot was born in Texarkana, Texas, the son of a cotton broker. He attended the US Naval Academy and was commissioned in 1953. Perot’s military experience undoubtedly helped him relate to ordinary Americans at a time when most men had similar experiences, given how common the draft was.

Perot founded his first company, Electronic Data Systems, in 1962. The company primarily focused on data processing, and its stock increased tenfold when the US government began contracting with it for Medicare claims processing. Eventually, in 1984, Perot sold the company for $2.4 billion, the equivalent of roughly $6.1 billion in 2025 terms. Perot declined to endorse either President George H. W. Bush or Bill Clinton in 1992, due to their similar stances regarding the Gulf War.

Ross Perot eventually chose to run for president in 1992 due to the significant unpopularity of the nominees, Bill Clinton and George H. W. Bush. Perot ran on a populist platform morally focused on the benefits to the people rather than the benefits to the government. He emphasized the flaws of both candidates: he highlighted allegations of sexual harassment against Clinton during his time as governor of Arkansas, and he attacked Bush for what he considered reckless spending during the Gulf War, using Bush’s “no new taxes” pledge to accuse him of hypocrisy when Bush approved tax hikes. Perot drew this style of politics primarily from the economic strategy he had developed while becoming a billionaire and running his companies, focusing on NAFTA, government spending, and budgeting, while also sticking to populist social positions of the time, such as allowing gays in the military, supporting the death penalty, and supporting the war on drugs. He prioritized these stances during his campaign to help increase his share of the vote.

There was a marked difference between how the public viewed Perot and how established businesses and politicians who already primarily endorsed one party viewed him. Politicians and companies supporting the Democrats and Republicans at the time largely thought that Perot would act as a spoiler candidate for the other party in both the 1992 and 1996 presidential elections. This proved inaccurate, however, as Perot drew roughly equal numbers of supporters away from the Republican and Democratic parties. Among the general public, many thought Perot would go on to win the election in November. In July 1992, ABC News reported a poll suggesting that Perot would win a plurality in every single state in the November election, apart from Washington DC and Massachusetts going to Bill Clinton and Oklahoma going to George H. W. Bush. Most people who threw their support behind Perot voted for him not because they believed he would spoil the election, but because they believed he could actually become president and change the country.

The first element of Perot's near success was his unusual, populist-leaning position on NAFTA, the North American Free Trade Agreement. NAFTA originated with George H. W. Bush, whose administration negotiated the agreement in his final year in the White House in 1992; the Republican Party saw it as beneficial to the American economy because it would increase free trade among Canada, the United States, and Mexico. "Bush left other foreign policies in an incomplete state. In 1992, his administration succeeded in negotiating a North American Free Trade Agreement (NAFTA), which proposed to eliminate tariffs between Canada, the United States, and Mexico."[4]

Clinton ultimately oversaw the ratification of NAFTA, which many Democrats, including members of his own party's leadership, were reluctant to pass. "Even when the administration focused on economics, it still floundered. House Democrats, in particular, believed Clinton made serious missteps in moving away from the party's traditions. One of his first major moves was to oversee the ratification of the North American Free Trade Act, the agreement with Mexico and Canada that President Bush signed as a lame duck in December 1992. Many top Democrats, including House Majority Leader Dick Gephardt, vehemently opposed the trade agreement as a threat to American workers and the unionized workforce. But Clinton, who embraced many of the tenets of free-market economics, insisted on sticking with the agreement."[5] A majority of the politicians elected to Congress in the early to mid-1990s, Republicans and Democrats alike, supported NAFTA for its expected long-term benefits of free trade, even though Clinton's reliance on Republican votes created the impression that he was giving in to the opposing party. "He cobbled together a bipartisan coalition to pass the legislation that would implement the terms of the treaty in August 1993. With his own party's congressional leaders standing against NAFTA, Clinton had to rely on his erstwhile enemies. Indeed, more Republicans voted to ratify the bill than Democrats: the House passed NAFTA by a vote of 234–200, with 132 Republicans and 102 Democrats in favor; the Senate approved it by a vote of 61–38, with 34 Republicans and 27 Democrats in favor. Though NAFTA represented a rare bipartisan victory for the president, it ultimately cost him the support of several important allies in Congress and other constituencies, while it gained him no new ones."[6]

NAFTA proved unpopular with many Americans, a sentiment reflected in the sharp decline of President Clinton's approval rating, from 64% to nearly half that, 37%. Public opinion stood staunchly in opposition, with many believing the treaty would only take American jobs and depress American wages. "Clinton and a great many economists maintained that breaking down trade barriers forced American exporters to become more efficient, thereby advancing their competitiveness and market share. But some corporations did move operations to Mexico, and pollution did plague some areas near the Mexican-American border. Labor leaders, complaining of the persistent stagnation of manufacturing wages in the United States, continued to charge that American corporations were not only outsourcing their jobs to Mexico (and other cheap labor nations) but were also managing to depress payrolls by threatening to move. When the American economy soured in 2001, foes of NAFTA stepped up their opposition to it."[7]

Perot was adamantly opposed to wasteful government spending and insisted on doing what was best for the American populace, a stance that pressured even Clinton to act. "In the second half of 1993, President Clinton hoped to restore his image as a moderate by pushing for some economic and political reforms. First, he worked in the summer of 1993 to address the federal debt built up in the Reagan and Bush eras. This had been an issue that third-party candidate Ross Perot made central in the 1992 campaign, and Clinton, burnishing his DLC credentials, wanted to demonstrate that Democrats could be the party of fiscal responsibility."[8]

Ross Perot, running as an outsider candidate, capitalized on his proto-populist opposition to NAFTA, which, as noted above, most Americans viewed unfavorably. He leaned on his anti-NAFTA stance to appeal to voters during and after the 1992 election, famously warning that NAFTA would create a "giant sucking sound" as American jobs left the country. "Ross Perot's campaign against NAFTA criticized the supposed (but in fact nonexistent) 'giant sucking sound' that would happen as NAFTA took jobs away from Americans."[9] And indeed, as noted above, some American companies did respond to the establishment of NAFTA by moving their operations to Mexico.

The amount of money flowing from PACs in 1996 was also extremely high, with Republican candidates raising roughly double what Democratic candidates did. "Democratic candidates raised 98.78 million dollars and Democratic committees raised 14.83 million dollars in the 1996 cycle. Republicans doubled that and raised 118.3 million dollars for Republican candidates and 9.12 million dollars from Republican Party committees."[10] Perot's campaign finance strategy differed from how Democrats and Republicans traditionally campaigned. He clearly knew he would face a funding disadvantage as he ran for president, with both major parties likely to out-fund him by tens of millions of dollars, so he took a down-to-earth route to financing his campaigns.

Perot wanted to be seen as a pragmatic, populist, honest figure for the people. First, he drew heavily on his own billionaire wealth, and even took out loans, to fund his 1992 and 1996 presidential campaigns. In 1992, Perot ran as an independent candidate. "Texas billionaire Ross Perot bankrolled the final leg of his presidential campaign in part with loans after spending more than $56 million of his own money with no expectation of being repaid, reports showed Friday. Perot listed more than $4.5 million of the $13.9 million he directed to his campaign between Oct. 15 and Nov. 23 as loans received from or guaranteed by himself, the latest report to the Federal Election Commission showed."[11] However, Perot's campaign strategy did not rely only on his own money; to cultivate a down-to-earth appeal, he also accepted small donations from supporters, who were allowed to contribute no more than five dollars each. "After stating several times during the talk show that he was not interested in becoming a politician, Mr. Perot, 61 years old, finally hedged his refusal. 'If voters in all 50 states put me on the ballot — not 48 or 49 states, but all 50 — I will agree to run,' he said. He also said he would not accept more than $5 from each supporter. A week after appearing on the talk show Mr. Perot's secretary, Sally Bell, said that she had received calls from people in 46 states promising support, as well as many $5 contributions."[12] This demonstrated that he would run for president only if the American people wanted him to, rather than out of his own political ambition.

Ross Perot relied on his businessman persona to appear as a strong economic figure, and he capitalized on that persona in two ways. First, he founded and ran his own political party, the Reform Party, under whose banner he ran in 1996 after running as an independent in 1992. The party advocated populism, centrism, and economic conservatism, positions with broad American support at the time. He used the party to motivate more people to vote for his campaign, since he could now present himself as the head of a broader political organization rather than as an individual candidate. However, his 1996 run under the newly created Reform Party did not improve his share of the vote; he won only around 40% of his 1992 popular-vote total. "Perot ran again in 1996 as the Reform Party candidate and won 8% of the popular vote. With his challenges to mainstream politics, he emerged as one of the most successful third-party candidates in US history, with the most support from across the political spectrum since Theodore Roosevelt."[13]

He also endorsed ending job outsourcing as part of his basic political platform. "Appealing to resentment towards established politicians and advancing himself as a vital third candidate option, Perot campaigned on a platform that included balancing the federal budget, opposition to gun control, the end of job outsourcing, opposition to NAFTA, and popular input on government through electronic direct democracy town hall meetings. Perot challenged his supporters to petition for his name to appear on the ballot in all fifty states."[14]

In his strong opposition to NAFTA, Perot argued that American jobs would be put in jeopardy to a far greater degree than jobs in Canada and Mexico, and, as noted above, some companies did move their operations to Mexico after the agreement took effect, opening further holes in the American job market. A significant number of politicians agreed with Perot: a majority of congressional Democrats and a sizable minority of congressional Republicans, in both the Senate and the House, voted against NAFTA, which was nevertheless endorsed and put in place by President Bill Clinton and a majority of Congress. NAFTA remained in place until 2020, when President Donald Trump replaced it with the USMCA, the United States–Mexico–Canada Agreement, which continued most of NAFTA's policies.

Overall, Perot's presidential campaigns relied on three main elements that nearly broke the two-party system: his oppositional position on NAFTA combined with moderate, centrist, fiscally conservative views; his unique form of campaign funding; and his use of his businessman skills and persona, along with the creation of his new party, to convince Americans to treat his campaign as genuinely winnable.

“1996 Federal Campaign Spending up 33% from 1992; Total Candidate and Major Party Disbursements Top $2 Billion.” 1997. Public Citizen. January 30, 1997. https://www.citizen.org/news/1996-federal-campaign-spending-up-33-from-1992-total-candidate-and-major-party-disbursements-top-2-billion/.

“Britannica Money.” 2024. Britannica.com. April 1, 2024. https://www.britannica.com/money/Ross-Perot.

Gerstle, Gary. 2022. The Rise and Fall of the Neoliberal Order: America and the World in the Free Market Era. New York, Ny: Oxford University Press.

Holmes, Steven A. 1992. “THE 1992 ELECTIONS: DISAPPOINTMENT — NEWS ANALYSIS: An Eccentric but No Joke; Perot’s Strong Showing Raises Questions on What Might Have Been, and Might Be.” The New York Times, November 5, 1992. www.nytimes.com/1992/11/05/us/1992-elections-disappointment-analysis-eccentric-but-no-joke-perot-s-strong.html.

Levin, Doron P. 1992. “THE 1992 CAMPAIGN: Another Candidate?; Billionaire in Texas Is Attracting Calls to Run, and $5 Donations.” Archive.org. March 7, 1992. https://web.archive.org/web/20190427005459/https://www.nytimes.com/1992/03/07/us/1992-campaign-another-candidate-billionaire-texas-attracting-calls-run-5.html.

Lichtenstein, Nelson, and Judith Stein. 2023. A Fabulous Failure. Princeton University Press.

Los Angeles Times. 1992. “Perot Spent $56 Million of Own, $4.5 Million in Loans on Race.” Los Angeles Times. December 5, 1992. https://www.latimes.com/archives/la-xpm-1992-12-05-mn-1144-story.html.

New York Times. 1992. “THE 1992 CAMPAIGN: The Media; Perot’s 30-Minute TV Ads Defy the Experts, Again.” Nytimes.com. October 27, 1992. https://www.nytimes.com/1992/10/27/nyregion/the-1992-campaign-the-media-perot-s-30-minute-tv-ads-defy-the-experts-again.html.

Norris, Pippa. 1993. Review of The 1992 US Elections. Government and Opposition 28 (1): 51–68.

“Political Action Committees (PACs).” 2024. OpenSecrets. https://www.opensecrets.org/political-action-committees-pacs/2024.

Patterson, James T. 2007. Restless Giant : The United States from Watergate to Bush v. Gore. New York/Oxford: Oxford University Press.

Savage, Robert L. 1993. “Changing Ways of Calling for Change: Media Coverage of the 1992 Campaign.” American Review of Politics 14 (July): 213–28. https://doi.org/10.15763/issn.2374-7781.1993.14.0.213-228.

Stiglitz, Joseph. 2015. The Roaring Nineties. Penguin UK. 

Stone, Walter J., and Ronald B. Rapoport. 2001. “It’s Perot Stupid! The Legacy of the 1992 Perot Movement in the Major-Party System, 1994–2000.” Political Science & Politics 34 (1): 49–58. https://doi.org/10.1017/s1049096501000087.

 “Third-Party Reformers.” n.d. Digital Public Library of America. https://dp.la/exhibitions/outsiders-president-elections/third-party-reform/ross-perot.

Walker, Martin. 1996. Review of The US Presidential Election, 1996. International Affairs 72 (4): 657–74. https://www.jstor.org/stable/2624114.


[1] Martin Walker, review of The US Presidential Election, 1996, International Affairs 72 (4): pg. 669.

[2] Pippa Norris, review of The 1992 US Elections, Government and Opposition 28 (1): pg. 51.

[3] Walter J. Stone and Ronald B. Rapoport, “It’s Perot Stupid! The Legacy of the 1992 Perot Movement in the Major-Party System, 1994–2000,” Political Science & Politics 34 (1): pg. 52. https://doi.org/10.1017/s1049096501000087.

[4] James T. Patterson, Restless Giant: The United States from Watergate to Bush v. Gore (New York: Oxford University Press, 2005), pg. 201–202.

[5] Patterson, Restless Giant, pg. 208–209.

[6] Patterson, Restless Giant, pg. 209.

[7] Patterson, Restless Giant, pg. 334.

[8] Kevin Kruse and Julian Zelizer, Fault Lines (2019), pg. 209.

[9] Joseph E. Stiglitz, The Roaring Nineties: Seeds of Destruction (London: Penguin, 2004), pg. 203.

[10] (“Political Action Committees (PACs)” 2024)

[11] “Perot Spent $56 Million of Own, $4.5 Million in Loans on Race,” Los Angeles Times, December 5, 1992. https://www.latimes.com/archives/la-xpm-1992-12-05-mn-1144-story.html.

[12] Los Angeles Times, December 5, 1992.

[13] “Third-Party Reformers,” Digital Public Library of America, pg. 1. https://dp.la/exhibitions/outsiders-president-elections/third-party-reform/ross-perot.

[14] “Third-Party Reformers,” pg. 2.

Teaching the Black Death: Using Medieval Medical Treatments to Develop Historical Thinking

Few historical events capture students’ attention as immediately as the Black Death. The scale of devastation, the drama of symptoms, and the rapid spread of disease all make it an inherently compelling topic. But beyond the shock value, medieval responses to the plague open the door to something far more important for social studies education: historical thinking. When students first encounter medieval cures like bloodletting, vinegar-soaked sponges, herbal compounds like theriac, or even the infamous “live chicken treatment”, their instinct is often to laugh or dismiss the past as ignorant. Yet these remedies, when studied carefully, reveal a medical system that was logical, coherent, and deeply rooted in the scientific frameworks of its time. Teaching plague medicine provides teachers with a powerful opportunity to challenge presentism, develop students’ contextual understanding, and foster empathy for people whose worldview differed radically from our own. Drawing on research into plague treatments during the Black Death, this article offers teachers accessible background knowledge, addresses common misconceptions, and provides practical strategies and primary-source approaches that use medieval medicine to strengthen disciplinary literacy and historical reasoning in the social studies classroom.

Understanding medieval plague medicine begins with understanding humoral theory, the dominant medical framework of the period. Medieval Europeans believed that the body’s health depended on maintaining balance among the four humors: blood, phlegm, yellow bile, and black bile (Leong, 2017). Illness occurred when these fluids fell out of proportion, making the plague less a foreign invader and more a catastrophic imbalance. Bloodletting was one of the most common responses, meant to “draw off the poisoned blood” and reduce fever. Other strategies included induced vomiting or purging, both intended to remove corrupted humors from the body. Treatises such as Bengt Knutsson’s The Dangers of Corrupt Air emphasized both prevention and treatment through the regulation of sensory experiences, most famously through the use of vinegar (Knutsson, 1994). Its sharp and purifying qualities made it useful for cleansing internal humors or blocking the inhalation of dangerous air. Though these methods seem foreign to modern readers, they reflect a rational system built upon centuries of inherited medical theory, offering students a clear example of how people in the past interpreted disease through the frameworks available to them.

Herbal and compound remedies were equally important in medieval plague treatment and worked in tandem with humoral correction. One of the most famous was theriac, a complex blend of dozens of ingredients including myrrh, cinnamon, opiates, and various roots (Fabbri, 2007). Practitioners believed that theriac fortified the heart and expelled harmful humors, with its complexity symbolizing the combined power of nature’s properties. Other remedies included ginger-infused ale, used to stimulate internal heat, or cupping, which involved applying heated horns or glasses to the skin in order to draw corrupted blood toward the surface. These treatments show the synthesis of classical medical texts, practical experimentation, and local knowledge. When teachers present these treatments in the classroom, students will begin to see medieval medicine not as random or superstitious, but as a sophisticated system shaped by observation, tradition, and reason.

Medieval healing also extended into the emotional and spiritual realms, reflecting the belief that physical and internal states were interconnected. Chroniclers described how fear and melancholy could hasten death, leading many to encourage celebrations, laughter, and community gatherings even during outbreaks. A monastic account from Austria advised people to “cheer each other up,” suggesting that joy strengthened the heart’s resilience. At the same time, religious writers like Dom Theophilus framed plague as both a physical and spiritual crisis, prescribing prayer, confession, and communion as essential components of healing. These practices did not replace medical treatment but complemented it, emphasizing the medieval tendency to view health holistically. Introducing students to these lifestyle-based treatments helps them recognize the complexity of medieval worldviews, where spirituality, emotion, and physical health were deeply intertwined.

Because plague remedies can appear unusual or ineffective to modern students, several misconceptions tend to arise in the classroom. Many students initially view medieval people as ignorant or irrational, evaluating the past through the lens of modern scientific understanding. When teachers contextualize treatments within humoral theory and medieval medical logic, students begin to appreciate the internal coherence of these ideas. Another misconception is that medieval treatments never worked. While these remedies could not cure the plague itself, many offered symptom relief, soothed discomfort, or prevented secondary infections, revealing that medieval medicine was neither wholly ineffective nor devoid of empirical reasoning (Archambeau, 2011). Students also often assume that religious explanations dominated all responses to disease. Examining both medical treatises and spiritual writings demonstrates that medieval responses were multifaceted, blending empirical, experiential, and religious approaches simultaneously. These insights naturally support classroom strategies that promote historical thinking.

Inquiry-based questioning works particularly well with plague treatments. Asking students, “Why would this treatment make sense within medieval beliefs about the body?” encourages them to reason from evidence rather than impose modern judgments. Primary-source stations using texts such as The Arrival of the Plague or The Treatise of John of Burgundy allow students to compare remedies, analyze explanations of disease, and evaluate the reliability and purpose of each author (Horrox, 1994). A creative but historically grounded activity involves inviting students to “design” a medieval plague remedy using humoral principles, requiring them to justify their choices based on qualities such as hot, cold, wet, and dry. Such exercises not only build understanding of the medieval worldview but also reinforce core social studies skills like sourcing, contextualization, and corroboration. Even broader reflections, such as comparing medieval interpretations of disease to modern debates about public health, can help students think critically about how societies make sense of crisis.

Teaching plague medicine carries powerful instructional implications. It fosters historical empathy by encouraging students to see past actions within their cultural context. It strengthens disciplinary literacy through close reading of primary sources and evaluation of evidence. It challenges misconceptions and reduces presentism, helping students develop a mature understanding of the past. The topic also naturally lends itself to interdisciplinary thinking, drawing connections between science, history, culture, and religion. Ultimately, medieval plague treatments offer teachers a rich opportunity to show students how historical interpretations develop through careful analysis of belief systems, available knowledge, and environmental conditions.

The Black Death will always capture students’ imaginations, but its true educational value lies in what it allows them to practice: empathy, critical thinking, and contextual reasoning. By reframing medieval treatments not as bizarre relics but as rational responses grounded in their own scientific traditions, teachers can transform a sensational topic into a meaningful lens for understanding how people in the past made sense of the world. In doing so, plague medicine becomes more than an engaging subject; it becomes a model for how historical study can illuminate the logic, resilience, and humanity of societies long removed from our own.

A fifteenth-century treatise on pestilence. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 193–194). Manchester University Press.

Archambeau, N. (2011). Healing options during the plague: Survivor stories from a fourteenth century canonization inquest. Bulletin of the History of Medicine, 85(4), 531–559. http://www.jstor.org/stable/44452234 

Fabbri, C. N. (2007). Treating medieval plague: The wonderful virtues of theriac. Early Science and Medicine, 12(3), 247–283. http://www.jstor.org/stable/20617676

Knutsson, B. (1994). The dangers of corrupt air. In R. Horrox (Ed. & Trans.), The Black Death (pp. 175–177). Manchester University Press.

Paris Medical Faculty. (1994). The report of the Paris medical faculty, October 1348. In R. Horrox (Ed. & Trans.), The Black Death (pp. 158–163). Manchester University Press.

Heinrichs, E. A. (2017). The live chicken treatment for buboes: Trying a plague cure in medieval and early modern Europe. Bulletin of the History of Medicine, 91(2), 210–232. https://www.jstor.org/stable/26311051 

Leong, E., & Rankin, A. (2017). Testing drugs and trying cures: Experiment and medicine in medieval and early modern Europe. Bulletin of the History of Medicine, 91(2), 157–182. https://www.jstor.org/stable/26311049 

The plague in Central Europe. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 193–194). Manchester University Press.

de’ Mussis, G. (1994). The arrival of the plague. In R. Horrox (Ed. & Trans.), The Black Death (p. 25). Manchester University Press.

The treatise of John of Burgundy. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 184–192). Manchester University Press.

Theophilus, D. (1994). A wholesome medicine against the plague. In R. Horrox (Ed. & Trans.), The Black Death (pp. 149–153). Manchester University Press.

The transmission of plague. (1994). In R. Horrox (Ed. & Trans.), The Black Death (pp. 182–184). Manchester University Press.

Combating and Treating the Black Death

Imagine a deadly disease ripping through your town, with your only hope of survival resting in the hands of health workers who relied on established medical knowledge and practical methods in desperate attempts to save lives. During the late medieval period, between 1347 and 1351, the Black Death spread chaos across Europe, including cities in France and Italy, killing millions of people in its path. It provoked great fear and uncertainty about survival, prompting a wide variety of treatment methods that blended medical practices with religious and supernatural beliefs. These different approaches reveal just how much medical knowledge at the time was shaped by inherited learning, traditional theories, and practical methods from the past, raising the question: how did health workers attempt to treat and combat the plague during the medieval period? They did so by combining established medical knowledge with practical methods. Treatments like theriac, bloodletting, air purification, and experimental remedies from the past like imperial powder brought traditional healing together with evolving practices. Examining these methods shows both how past medical knowledge and evolving practices were used to treat and combat the Black Death and how medical treatments grew and changed intellectually during the crisis.

These health workers varied widely in their levels of medical knowledge; some were volunteers, nuns, inexperienced physicians, or barber surgeons. Despite their diverse levels of expertise, they all played a central role during the plague, giving treatments to those who fell victim to the Black Death. This approach highlights the interplay among practical methods, established medical knowledge, adaptation, and preventive measures in combating the plague.

Health workers tried to fight back against the Black Death using practical methods like bloodletting, which drew on past medical knowledge and on the public health rules developing at the time. As health workers struggled to cope with the crisis the Black Death was bringing, practical and hygienic measures became an attempt to help those falling ill. One such attempt was the process of bloodletting. Neil Murphy’s article, “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450-c.1560,” details the development of public health systems and the ordinances that shaped responses to the plague.[1] Murphy argues that these ordinances emerged from evolving strategies, like those in Italy, and were connected to cultural and intellectual contexts that brought medical theories together with practical actions. Murphy emphasizes the practice of bloodletting, performed by barber surgeons or surgeons, a procedure aimed at removing contaminated blood and thereby slowing the disease in the body.2 This method shows the connection between the medical theories of the time and the practical actions taken, both shaped by the era’s intellectual context.

Past strategies figured heavily in these attempts. Along with bloodletting, we also see efforts to change emotional states alongside medical practices, visible through survival stories. From survivors’ stories, we can understand the attempts made to stop the plague, especially by health workers drawing on past medical knowledge and practical treatments, much as they did with bloodletting. Nicole Archambeau, in “Healing Options during the Plague: Survivor Stories from a Fourteenth-Century Canonization Inquest,” emphasizes the intellectual context of medicine and its “miracles” on those it healed, showing how beliefs and medical practices intersected to shape responses to the plague.[2] At this time, some people wanted healing methods that combined religious and practical approaches, including attempts to alter emotional states. Archambeau argues that “Witnesses had healing options… their testimonies reveal a willingness to try many different methods of healing, often all at once.”[3] This shows how survivors relied on every available resource, from family and friends to health workers, connecting their beliefs with the intellectual medical practices of the time. Health workers adapted their methods based on the resources available as well as on patients’ wants and needs, highlighting their adaptability and flexibility and their commitment to treating those suffering during this time of horror and devastation.

Similarly, drawing on past medical knowledge, health workers gave treatments that blended intellectual medical knowledge with practical methods in their attempts to treat the plague. Another piece of these treatments was a compound called theriac. Christiane Nockels Fabbri’s article “Treating Medieval Plague: The Wonderful Virtues of Theriac” shows how theriac, a compound used as an antidote since ancient times, became a crucial treatment during the Black Death. Fabbri argues that the use of theriac demonstrates how health workers applied a traditional remedy to a new disease, revealing the conservatism of medical practice. Fabbri states that “In plague medicine, theriac was used as both a preventive and therapeutic drug and was most likely beneficial for a variety of disease complaints.”[4] This shows how health workers relied on theriac both for its practical efficacy and for its intellectual and cultural significance inherited from the past.

These three sources are alike in showing how health workers tended to link past medical knowledge with practical methods to help the suffering, revealing how they went about treating the plague. Treatments like bloodletting, the miracle-seeking methods patients requested, and theriac were just a few of the ways they attempted to help those who fell sick. These treatments also rested on public health measures introduced in cities to contain the spread of the plague. Ordinances aimed to isolate the disease and keep calm amid the chaos the plague was bringing into towns, creating a framework within which health workers could approach the treatment of those who fell sick.

One of the main and best-known treatments given by health workers during this time was the drug theriac. This medicine was extremely popular for its perceived effectiveness and was sought out by victims once they fell ill or feared they would fall ill. “The Real Theriac – Panacea, Poisonous Drug or Quackery?” by Danuta Raj, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz discusses the compound, its supposed ability to remove disease and poison from the body, and its wide use during the medieval period: “Consequently, Theriac was being prepared during epidemics, especially the plague (Black Death), in large quantities as a form of emergency medicine (Griffin, 2004).”[5] By relying on theriac as a direct treatment, health workers showed their commitment to an accessible, well-known drug that could give people confidence the treatment would work during a time of uncertainty and devastation.

Correspondingly, bloodletting was another direct form of treatment that health workers used on those who had the plague. Health workers would prick veins to extract “bad blood” from the body and restore its balance. We see this in document 62, “The Treatise of John of Burgundy, 1365,” written by John of Burgundy, which reflects the practical medical knowledge health workers were applying to those struck by the Black Death. Burgundy discusses the use of bloodletting, instructing that “If, however, the patient feels prickings in the region of the liver, blood should be let immediately from the basilic vein of the right arm (that is the vein belonging to the liver, which is immediately below the vein belonging to the heart).”[6] By prescribing a specific technique for a specific symptom, he provides a practical method of treatment that shows how health workers used hands-on interventions to combat the plague.

Both methods were widely known during the medieval period. Each offered hope to the desperate, who wanted treatment so they would not die. These treatments gave victims a feeling of control over a terrifying situation and a sense of hope for recovery. Knowing that theriac and bloodletting were available as treatments helped people feel less overwhelmed and made health workers seem like a redeeming feature of a deadly crisis.

Established Medical Knowledge

During the medieval period, health workers understood miasma, or contaminated air, to be the main cause of the Black Death’s rapid spread and deadliness. Based on this understanding, they implemented environmental purification strategies to limit exposure to miasma. Bengt Knutsson’s “The dangers of corrupted air” places great emphasis on this fear of contaminated air and describes methods used to cleanse the spaces and environments people lived in. One practice health workers adopted to keep miasma at bay: “Therefore let your house be clean and make clear fire of wood flaming. Let your house be made with fumigation of herbs, that is to say with leaves of bay tree, juniper…”[7] Knutsson also advised opening windows at certain times and offered remedies for those who felt sick.[8] These techniques reflect how established medical knowledge was used to devise ways to treat and combat the plague. By incorporating purification methods into plague prevention, health workers adapted their knowledge of air quality into strategies against the Black Death.

Amid the fears of the Black Death, health workers relied on past medical knowledge, practices, and strategies to manage the spread of the disease and treat those who had been infected. The “Ordinances against the spread of plague, Pistoia, 1348” elaborate on how officials used this knowledge to reduce transmission and create a safer environment for treatment. The chronicler explains that exposure to the ill was limited by strictly restricting interactions between people and patients.[9] This gave health workers the safest possible conditions in which to apply treatments like bloodletting or theriac, in a more controlled environment. The approach further reflects the combination of traditional medical knowledge and practical adaptation through which health workers attempted to combat the plague’s destruction.

Health workers thus relied heavily on past medical knowledge and theories during this time of uncertainty, bringing adaptation together with established learning. The belief that bad air caused the disease underpinned purification techniques such as burning herbs to mask the miasma. The ordinances stressing isolation and restricted interaction, which gave health workers a safer environment, showed both their adaptability to the demands of the plague and their preservation of historical medical theory. This illustrates the mix of continuity and innovation that emerged during this period as people tried to understand and combat the plague.

One way health workers attempted to treat and combat the plague was by developing treatments adapted from past medical knowledge. An example is imperial powder, described in John Burgundy’s “The Treatise of Burgundy, 1365” as a “powerful preventative” thought to be stronger than theriac. Burgundy explains how “gentile emperors used it against epidemic illness, poison and venom, and against the bite of serpents and other poisonous animals.”[10] The powder was made from ingredients such as St John’s wort, medicinal earth from Lemnos, and dittany, a diverse mix long believed to neutralize the poisons and venoms inside the body. It was either applied directly to the skin or mixed into a drink such as wine for ingestion. This shows health workers’ willingness to experiment with past medical treatments, adapting them to the plague before them in hopes of finding a better treatment for the Black Death.

Beyond direct medical treatments, health workers implemented strict isolation strategies to combat and limit the spread of plague while keeping the environment safe enough to treat those who fell ill. Louis Heyligen’s “The plague in Avignon” emphasizes this isolation, keeping away from neighboring areas and people so that health workers could do what was needed to help, an attempt to manage the spread of the disease through the town. It advises: “…avoid getting cold, and refrain from any excess, and above all mix little with people – unless it be with few who have healthy breath; but it is best to stay at home until the epidemic has passed.”[11] Such advice reflects how the public health strategies employed in cities were tied to medical treatment, because limiting exposure directly allowed health workers to treat the sick more safely. Minimizing contact was an effective strategy for slowing the disease’s transmission. These measures also show the lengths to which people, driven by the emotions the Black Death provoked, were willing to go to make conditions safer for families and health workers.

Taken together, experimental treatments like imperial powder and the isolation policies reveal just how much health workers combined preexisting medical knowledge with preventative measures in their efforts to treat and combat the plague. This adaptability influenced later medical practice and laid a foundation for future disease-prevention strategies.

In conclusion, we have explored several ways in which health workers attempted to treat and combat the plague through pre-established medical knowledge and practical methods. These health workers, remarkably diverse in who they were, applied many strategies: enforcing strict public health ordinances, bloodletting by barber surgeons, air purification, the use of theriac, and experimentation with imperial powder. They showed great adaptability, building on existing medical knowledge to address the deadliest crisis of their age. This analysis gives a deeper understanding of how they used past resources to comprehend the disease and try to save those who contracted it. It also shows how these attempts were deeply rooted in the intellectual history of the time, as health workers drew on past medical scholars and accumulated knowledge to inform their practices and methods. By engaging with that intellectual tradition, they placed a new building block atop centuries of medical knowledge, experimenting with it and devising new responses to a new disease. These contributions strengthen our understanding of medical history during the Black Death and the centuries before it.

Archambeau, Nicole. “Healing Options during the Plague: Survivor Stories from a Fourteenth-Century Canonization Inquest.” Bulletin of the History of Medicine 85, no. 4 (2011):  531–59. http://www.jstor.org/stable/44452234.

Burgundy, “The Treatise of Burgundy, 1365.” pp. 184-193.

Chiappelli, A. “Ordinances against the Spread of Plague, Pistoia, 1348.” pp. 194-203.

Fabbri, Christiane Nockels. “Treating Medieval Plague: The Wonderful Virtues of Theriac.” Early Science and Medicine 12, no. 3 (2007): 247–83. http://www.jstor.org/stable/20617676.

Heyligen, “The Plague in Avignon.” pp. 41-45.

Horrox, R., ed. The Black Death (Manchester: Manchester University Press, 1994).

Knutsson, “The dangers of corrupted air.” pp. 173-177.

Murphy, Neil. “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450–c.1560.” In The Fifteenth Century XII: Society in an Age of Plague, edited by Linda Clark and Carole Rawcliffe, 139-160. Woodbridge: Boydell & Brewer, 2013.

Raj, Danuta, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz. “The Real Theriac – Panacea, Poisonous Drug or Quackery?” Journal of Ethnopharmacology 281 (December 2021): N.PAG. doi:10.1016/j.jep.2021.114535.


[1] Murphy, Neil. “Plague Ordinances and the Management of Infectious Diseases in Northern French Towns, c.1450–c.1560.” In The Fifteenth Century XII: Society in an Age of Plague, edited by Linda Clark and Carole Rawcliffe, 139-160. Woodbridge: Boydell & Brewer, 2013. Murphy, 146.

[2] Archambeau, Nicole. “Healing Options during the Plague: Survivor Stories from a Fourteenth Century Canonization Inquest.” Bulletin of the History of Medicine 85, no. 4 (2011): 531–59. http://www.jstor.org/stable/44452234.

[3] Archambeau, 537.

[4] Fabbri, Christiane Nockels. “Treating Medieval Plague: The Wonderful Virtues of Theriac.” Early Science and Medicine 12, no. 3 (2007): 247–83. http://www.jstor.org/stable/20617676.  

[5] Raj, Danuta, Katarzyna Pękacka-Falkowska, Maciej Włodarczyk, and Jakub Węglorz. 2021. “The Real Theriac – Panacea, Poisonous Drug or Quackery?” Journal of Ethnopharmacology 281 (December): N.PAG.

[6] Burgundy, “The Treatise of Burgundy, 1365,” in The Black Death, ed. and trans. Rosemary Horrox (Manchester: Manchester University Press, 1994), 189.

[7] Knutsson, “The dangers of corrupted air,” p. 176.

[8] Knutsson, “The dangers of corrupted air,” p.176

[9] Chiappelli, “Ordinances against the spread of plague, Pistoia, 1348,” p. 195 

[10] Burgundy, “The Treatise of Burgundy, 1365” p.190

[11] Heyligen, “The Plague in Avignon” p. 45

Unseen Fences: How Chicago Built Barriers Inside its Schools

Northern public schools are rarely centered in national narratives of segregation. Yet as Thomas Sugrue observes, “even in the absence of officially separate schools, northern public schools were nearly as segregated as those in the south.”[1] Chicago illustrates this: despite the absence of Jim Crow laws, the city developed a racially organized educational system that produced outcomes nearly identical to those of segregated southern districts. The city’s officials celebrated equality while pursuing practices that isolated black students in overcrowded schools. Though the north was legally desegregated, segregation there was pervasive, embedded in the policies and structures of urban governance.

This paper argues that Chicago school segregation was intentional. It resulted from a coordinated system that connected housing discrimination, political resistance to integration, and targeted policies crafted to preserve racial separation in public schools. While Brown v. Board of Education outlawed segregation by law, Chicago’s political leaders, school administrators, and allied networks maintained it through zoning, redlining, and administrative manipulation. Drawing on primary sources, including newspapers and NAACP records, alongside historical scholarship, this paper shows how segregation in Chicago was enforced, defended, challenged, and exposed by the communities it harmed.

The historical context outlined above leads to several central research questions that guide this paper. First, how did local governments and school boards respond to the Brown v. Board of Education decision, and how did their policies influence the persistence of segregation in Chicago? Second, how did housing patterns and redlining contribute to the continued segregation of schools? Third, how did the racial dynamics of Chicago compare to those in other northern cities during the same period?

These questions have been explored by a range of scholars. Thomas Sugrue’s Sweet Land of Liberty provides the framework for understanding northern segregation as a system rooted in local governance rather than state law. Sugrue argues that racism in the north was “structural, institutional, and spatial rather than legal,” shaped through housing markets, zoning decisions, and administrative policy. His work shows that northern cities constructed segregation through networks of bureaucratic authority that were hard to challenge. Sugrue’s analysis supports this paper’s argument by demonstrating that segregation in Chicago was not accidental but maintained through everyday decisions.

Philip T.K. Daniel’s scholarship deepens this analysis of Chicago by showing how school officials resisted desegregation both before and after Brown v. Board. In A History of the Segregation-Discrimination Dilemma: The Chicago Experience, Daniel shows that Chicago public school leaders manipulated attendance boundaries, ignored overcrowded schools, and defended “neighborhood schools” as a way to preserve racial separation. Daniel highlights that “in the years since 1954 Brown v. Board of Education decision, research have repeatedly noted that all black schools are regarded inferior,”[2] underscoring the persistence of inequality despite federal mandates. Daniel’s findings reinforce this paper’s claim that Chicago’s system was intentionally constructed and that local officials played a central role in maintaining segregation.

Dionne Danns offers a different perspective by examining how students, parents, and community activists responded to Chicago Public Schools’ discriminatory practices. In Crossing Segregated Boundaries, her study of Chicago’s high school students’ movement, Danns argues that local activism was essential to exposing segregation that officials tried to hide. She shows that black youth did not merely endure the inequalities of their schools but developed campaigns, boycotts, and sit-ins that challenged Chicago Public School officials and reshaped the politics of education. Danns’ work supports the middle portion of this paper, which analyzes how community resistance forced Chicago’s segregation practices into public view.

Paul Dimond’s Beyond Busing highlights how the court system struggled to confront segregation in northern cities because it was not codified in law. Dimond argues that Chicago officials used zoning, optional attendance areas, intact busing, and boundary manipulation to maintain separation while staying within the law. He highlights that “the constant thread in the boards school operation was segregation, not neighborhood,”[3] showing that geographic justification often served as a cover for racial intent. Dimond’s analysis strengthens the argument that Chicago’s system was coordinated and deliberate, built through “normal” administrative decisions.

Jim Carl extends the scholarship into the era of Harold Washington, showing how political leadership shaped educational reform. Carl argues that Washington sought to improve black schools not through desegregation but through resource equity and economic opportunities for black students. This perspective highlights how entrenched the early segregation policies were: even reformers like Washington had to work within a system built to disadvantage black communities. While Carl’s focus falls later than this paper’s period, his work shows how political structures preserved segregation for decades.

Chicago’s experience with segregation was both typical of and distinct among northern cities. Cities like Detroit, Philadelphia, and New York faced similar challenges, but Chicago’s political machine gave them a distinctive shape. As Danns explains in “Northern Desegregation: A Tale of Two Cities,” “Chicago was the earliest northern city to face Title VI complaint. Handling the complaint, and the political fallout that followed, left the HEW in a precarious situation. The Chicago debacle both showed HEW enforcement in the North and West and the HEW investigating smaller northern districts.”[4] This shows how much political interest molded the city’s approach to desegregation, and how difficult it was for federal authorities to hold local systems responsible. The tension between local and federal power highlighted a broader national struggle for civil rights in the north, a reminder that racial inequality was confined to no single region. Chicago’s case highlights the difficulty of achieving desegregation in areas where segregation rested less on law and more on policy and politics.

Local policy and zoning decisions entrenched segregation further. In Beyond Busing, Paul R. Dimond writes, “To relieve overcrowding in a recently annexed area with a racially mixed school to the northeast, the Board first built a school in a white part and then rejected the superintendent’s integrated zoning proposal to open new schools…. the constant thread in the Board’s school operations was segregation, not neighborhood.”3 These decisions show that separation was maintained through policy manipulation rather than overtly illegal measures.

Dimond further emphasizes the pattern: “throughout the entire history of the school system, the proof revealed numerous manipulations and deviations from ‘normal’ geographic zoning criteria in residential ‘fringes’ and ‘pockets,’ including optional zones, discontinuous attendance areas, intact busing, other gerrymandering and school capacity targeted to house only one race; this proof raised the inference that the board chose ‘normal’ geographic zoning criteria in the large one-race areas of the city to reach the same segregated result.”3 These adjustments were subtle but effective in strengthening segregation, ensuring that even when schools were open, their location and resource allocation meant black students and white students received different educational environments. The school board’s actions reveal a broader strategy of protecting the status quo under the banner of “neighborhood” schools, making clear that segregation was not an accident but a policy.

On the other hand, Carl highlights policy solutions considered for promoting integration, among them “other programs which attract a multiracial, mixed-income student body. Redraw district lines and place new schools to maximize integration… busing does not seem to be an issue in Chicago…it should be obviously metro wide, because the school system is 75 percent minority.”[5] This approach underscores the importance of systemic solutions that go beyond busing: integration requires addressing the roots of racial segregation in schools. Carl’s argument suggests that busing by itself could not create lasting change. Redrawing district lines is not just about moving children around but about changing the structures that reinforce segregation.

Understanding Chicago’s segregation requires comparing northern and southern practices. Unlike the south, where segregation was written into law, northern segregation was de facto, maintained through residential patterns, local policies, and bureaucratic practices. Sugrue explains, “in the south, racial segregation before Brown was not fundamentally intertwined with residential segregation.”1 This underscores how urban geography and housing discrimination shaped educational inequality in northern cities. In Chicago, racially restrictive covenants and redlining confined black families to specific neighborhoods, which in turn determined which schools their children could attend. This allowed northern officials to claim that segregation reflected residential patterns rather than deliberate policy.

Southern districts did not rely on geographic attendance zones to enforce separation; “southern districts did not use geographic attendance zones to separate black and whites.”1 In contrast, northern cities like Chicago used attendance zones and local governance to achieve similar results. Danns notes, “while legal restrictions in the south led to complete segregation of races in schools, in many instances the north represented de facto segregation, which was carried out as a result of practice often leading to similar results.”4 This highlights the different methods of segregation across regions, even after the legal mandates for integration. In the south, segregation was enforced by law, making the racial boundaries clear and intentional.

Still, advocacy groups were aware of the nationwide nature of this struggle. The Key West Citizen reported that “a stepped-up drive for greater racial integration in public schools, North and South is being prepared by “negro” groups in cities throughout the country.” Resistance to integration could take extreme forms, including forcing black children to travel long distances to segregated schools while allowing white children to avoid those schools. The Robin Eagle noted, “colored children forced from the school they had previously attended and required to travel two miles to a segregated school…white children permitted to avoid attendance at the colored school on the premise that they have never been enrolled there.”[6] These examples show that resistance to integration formed a national pattern of inequality. Even as activists and civil rights groups fought for educational justice, local officials and white communities found ways to preserve racial segregation. For black families, this meant their children bore the physical and emotional burdens of segregation: long commutes, poor facilities, and constant reminders of discrimination. White students, by contrast, benefited from greater funding and better-funded schools. These differences show how deeply racial inequality was embedded in American education, as northern and southern systems alike sustained it in different ways.


The policies that shaped Chicago schools in the 1950s and 1960s cannot be understood without looking at key figures such as Benjamin Willis and Harold Washington. Willis, superintendent of Chicago Public Schools from 1953 to 1966, became known for his resistance to integration efforts. His administration relied on the construction of mobile classrooms, known as “Willis wagons,” to deal with the overcrowding of Black schools. Rather than reassigning students to nearby under-enrolled schools, Willis placed these classrooms in the yards of segregated schools. As Danns explains, Willis “was seen by Chicagoans as the symbol of segregation as he gerrymandered school boundaries and used mobile classrooms (labeled Willis Wagons) to avoid desegregation.”4 His refusal to implement desegregation measures made him a target of protest, including boycotts led by families and students.

On the other hand, Harold Washington, who would become Chicago’s first black mayor, represented a shift toward community-based reform and equity-centered policies. Washington believed that equality in education required more than racial integration; it demanded structural investment in Black schools and economic opportunities for Black students. Jim Carl describes the approach “Washington would develop over the next thirty-three years, one that insisted on adequate resources for Black schools and economic opportunities for Black students rather than viewing school desegregation as the primary vehicle for educational improvement.”5 His leadership connected the civil rights struggles of the 1950s and 1960s with the justice movements of the post-civil rights era.

Chicago’s experience in the mid-twentieth century shows how racial segregation was maintained through policy rather than law. In the postwar era, Chicago’s population grew rapidly; Daniel writes that “this increased the black school population in that period by 196 percent.”4 By the 1950s, the Second Great Migration intensified these trends, with thousands of Black families arriving from the south every year. As Sugrue notes, “Blacks who migrated Northern held high expectations about education.”1 There was hope that northern schools would offer opportunities unavailable in the south. Yet Chicago’s public schools soon became a site of racial conflict, as overcrowding, limited resources, and administrative discrimination exposed the limits of those expectations.

One defining feature of Chicago’s educational system in this era was the “neighborhood schools” policy. On paper, the policy simply allowed students to attend schools near their homes, reinforcing community ties. In practice, it was a powerful instrument for preserving racial segregation. Sugrue explains, “in densely populated cities, schools often within a few blocks of one another, meaning that several schools might serve as ‘neighborhood’.”1 Because housing in Chicago was strictly segregated through redlining, racially restrictive covenants, and de facto residential exclusion, neighborhood-based zoning sorted Black and white students into separate schools. This system allowed city officials to claim that segregation merely reflected residential patterns rather than intent, thereby avoiding any apparent violation of Brown. A 1960 New York Times article by Anthony Lewis, “Fight on Floor now ruled out,” revealed how Chicago officials publicly dismissed accusations of segregation while internally sustaining the practice. The article reported that school leaders insisted that racial imbalance merely reflected “neighborhood conditions” and that CPS policies were “not designed to separate the races,” even as Black schools operated far beyond capacity.[7] This national visibility shows that Chicago’s segregation was deliberate: officials framed their decisions as demographic realities even while consistently rejecting integration measures that would have eased overcrowding in Black schools.

The consequences of these policies became visible by the 1960s. Schools in Black neighborhoods were overcrowded, operating on double shifts or in temporary facilities. As Dionne Danns describes in Northern Desegregation: A Tale of Two Cities, “before school desegregation, residential segregation, along with Chicago Public School (CPS) leaders’ administrative decisions to maintain neighborhood schools and avoid desegregation, led to segregated schools. Many Black segregated schools were historically under-resourced and overcrowded and had higher teacher turnover rates.”[8] Nearby white schools, meanwhile, had empty classrooms and more modern facilities. This inequality sparked widespread community outrage, setting the stage for the educational protests that would define Chicago’s civil rights movement.

The roots of Chicago’s school segregation lay in its housing policies. Redlining, the practice by which federal agencies and banks denied loans to Black homebuyers, systematically confined Black families to certain areas of the city’s south and west sides. These neighborhoods were often marked by poor housing stock, limited public investment, and overcrowding. Because school attendance zones were aligned with neighborhood boundaries, patterns of residential segregation were mirrored in the city’s schools. As historian Matthew Delmont explains in Why Busing Failed, this dynamic drew the attention of federal authorities: “On July 4, 1965, after months of school protest and boycotts, civil rights groups advocated in Chicago by filing a complaint with the U.S. Office of Education charging that Chicago’s Board of Education violated Title VI of the Civil Rights Act of 1964.”[9] This reflects how deeply intertwined housing and education policy were as engines of racial segregation. The connection between where families could live and where their children could attend school shows how racial inequality was reproduced through everyday administrative decisions, shaping opportunities for generations of black Chicagoans.

Together, these systems of housing, zoning, and education maintained a racial hierarchy under local control. Even after federal courts and civil rights organizations pushed for compliance with Brown, Chicago’s officials argued that their schools reflected demographic reality rather than discriminatory intent. That defense obscured the degree to which city planners, developers, and school administrators collaborated. School segregation was not a northern offshoot of southern-style Jim Crow but a defining feature of northern governance.

Chicago’s Black communities did not accept school segregation passively. Legal challenges and community activism were central tools in confronting these inequalities. The NAACP Legal Defense Fund filed numerous lawsuits challenging discriminatory policies and targeted the districts that violated the state’s education law. Parents and students organized boycotts and protests to draw attention to the injustices. Sugrue notes, “the stories of northern school boycotts are largely forgotten. Grassroots boycotts, led largely by mothers, inspired activists around the country to demand equal education.”[1] The boycotts were not merely symbolic but strategic: community-driven actions targeted at the system’s resistance to change. These movements represented an assertion of power by communities that had been silenced by discriminatory policies. Parents, especially Black mothers, became central figures in these campaigns, using their voices and organizing skills to demand accountability from school boards and city officials. Their actions embodied the conviction that change would come not from the courtrooms alone, but from the people affected by injustice. The boycotts disrupted the normal operation of the school system and forced officials to listen to demands for equal education.

Danns emphasizes the range of activism during this period, writing in “Chicago High School Students’ Movement for Quality Public Education”: “in the early 1960’s, local and prominent civil rights organizations led a series of protests for school desegregation. These efforts included failed court cases, school boycotts, and sit-ins during superintendent Benjamin Willis administration, all which led to negligible school desegregation.”[10] Despite their limited success, these efforts were important for exposing the moral contradictions of northern liberalism and the persistence of racial inequality outside the South. Student-led protests and community organizing not only challenged the policies of the Chicago Board of Education but also inspired a new generation of young people to see education as a central front in the struggle for civil rights.

Legal tactics were critical in enforcing desegregation. An article in the Evening Star reported that the NAACP acted “on the basis of an Illinois statute which states that state-aid funds may be withheld from any school district that segregated based on race or color.”[11] The withholding of state funds applied pressure on resistant boards, showing that legal leverage could carry real consequences. When one board attempted to deny Black students admission, the NAACP intervened. The Evening Star reported, “Although the board verbally refused to admit negro students and actually refused to do so when Illinois students applied for admission, when the board realized that the NAACP was going to file suit to withhold state-aid funds, word was sent to each student who had applied that they should report to morning classes.”[12] This episode shows how legal and financial pressure became one of the most effective means of enforcing desegregation. The threat of losing funds forced school boards to comply with integration orders, underscoring that moral appeals alone were inadequate to undo a system of discrimination. The NAACP’s strategy demonstrated the importance of pairing advocacy with legal enforcement, using the courts and state statutes to hold officials accountable. It illustrated that the fight for educational equality required not only protest but also a legal foundation to ensure that justice was carried out. This combination of legal action and grassroots mobilization reflects a strategy that leveraged both formal institutions and community power, showing that northern resistance to desegregation did not go unchallenged.

Chicago’s segregated schools had long-lasting effects on Black students, particularly through inequalities in the education system. Schools in Black neighborhoods were often overcrowded, underfunded, and provided fewer academic resources than their white counterparts. These disparities limited educational opportunities and shaped students’ futures. The lack of funding meant that schools could not afford advanced coursework, extracurricular programs, or even adequate classroom resources, creating a gap in the quality of education between Black and white students. Black students in these environments faced not only educational disadvantages but also diminished hope for their futures.

Desegregation advocates sought to address both inequality and social separation. Danns explains, “Advocates of school desegregation looked to create integration by putting students of different races into the same schools. The larger goal was an end to inequality, but a by-product was that students would overcome their stereotypical ideas of one another, learn to see each other beyond race, and even create interracial friendships.”[4] While the ideal of desegregation included fostering social understanding, the reality of segregated neighborhoods and schools often hindered these outcomes. Even when legal policies aimed to desegregate schools, social and economic barriers continued to enforce separation. Many white families moved to suburban districts to avoid integration, leaving fewer classrooms racially diverse and many urban schools attended overwhelmingly by students of color.

The larger society influenced students’ experiences inside schools, despite efforts to create inclusive educational spaces. Danns explains, “In many ways, these schools were affected by the larger society; and tried as they might, students often found it difficult to leave their individual, parental, or community views outside the school doors.”[9] Even when students developed friendships across racial and ethnic lines, segregated boundaries persisted: “Segregated boundaries remained in place even if individuals had made friends with people of other racial and ethnic groups.”[4] The ongoing influence of social norms and expectations meant that schools were not shielded from the racial tensions that existed outside their walls. While teachers and administrators may have tried to cultivate a more integrated environment, the racial hierarchies and prejudices of the surrounding community often shaped students’ interactions. These hurdles were not always visible, but they influenced behavior within the school in subtle ways. Despite efforts at inclusion, the societal context of segregation remained a persistent obstacle, limiting both integration and equality of education.

Beyond the social barriers, the practical issue of overcrowding continued to affect education. Carl highlights this concern, quoting Harold Washington, who stated that the issue “is not ‘busing,’ it is freedom of choice. Parents must be allowed to move their children from overcrowded classrooms. The real issue is quality education for all.”[5] The focus on “freedom of choice” underscores that structural inequities, rather than simple policy failures, were central to the ongoing disparities in Chicago’s schools.

Overcrowding in urban schools reflected a deeper root of inequality. Black neighborhoods were often left with underfunded, overcrowded schools, while white schools enjoyed smaller classes and more resources. The phrase “freedom of choice” was meant to assert that parents in marginalized communities should have the same educational opportunities as those in wealthier neighborhoods. In practice, however, this freedom was constrained by residential segregation, unequal funding, and barriers that restricted mobility within the public school system.

The long-term impact of segregation extended beyond academics into the social and psychological lives of Black students. Segregation reinforced systemic racism and social divisions, contributing to limited upward mobility, economic inequality, and mistrust of institutions. Beyond the classroom, these effects shaped how Black students viewed themselves and their place in society. Psychologically, segregation often produced lower self-esteem and diminished academic motivation. Socially, it limited interaction between racial groups and hardened stereotypes. Over time, these experiences fed a cycle of mistrust in educational and governmental institutions, as Black communities continued to struggle against entrenched inequalities.

Many Black students were unprepared for the realities beyond their segregated neighborhoods: “Some Black participants faced a rude awakening about the world outside their high schools. Their false sense of security was quickly disrupted in the isolated college towns they moved to, where they met students who had never had access to the diversity they took for granted.”[9] This contrast between the relative diversity within segregated urban schools and these new environments illustrates how deeply segregation shaped expectations, socialization, and identity formation.

Even after desegregation policies were implemented, disparities persisted in access to quality education. Danns observes that, decades later, access to elite schools remained unequal: “After desegregation ended, the media paid attention to the decreasing spots available at the city’s top schools for Black and Latino students. In 2018, though Whites were only 10 percent of the Chicago Public Schools population, they had acquired 23 percent of the premium spots at the top city schools.”[7] This statistic underscores the enduring structural and systemic inequalities in the educational system, showing how racial privilege and access to resources continued to favor certain groups and disadvantage others. Segregation has taken new forms, operating through economic and residential patterns rather than explicit laws. This highlights the limitations of policy alone and underscores the need for deeper social, economic, and institutional change to achieve educational equality.

Segregation not only restricted access to academic resources but also had broader psychological consequences. By systematically limiting opportunities and reinforcing racial hierarchies, segregated schooling contributed to feelings of marginalization and diminished trust in public institutions. The experience of navigating a segregated school system often left Black students negotiating between pride in their communities and the constraints imposed by discriminatory policies. These psychological scars lasted long after segregation formally ended. The pain of decades of separation made it difficult for many Black families to believe in change that promised equality. Segregation was not only an institutional injustice but also an emotional one, shaping how generations of students understood their worth and their connection to a system that had let them down.

The structural and social consequences of segregation were deeply intertwined. Overcrowded and underfunded schools diminished educational outcomes, which in turn limited economic and social mobility. Social and psychological barriers reinforced these disparities, creating a cycle that affected multiple generations. Yet the activism, legal challenges, and community efforts described earlier demonstrate that Black families actively resisted these constraints, fighting for opportunity and equality. Their fight not only challenged the system’s injustices but also laid a foundation for further civil rights reforms and influenced future movements.

By examining Chicago’s segregation in the context of broader northern and national trends, it becomes clear that local policies and governance played an outsized role in shaping Black students’ experiences. While southern segregation was often codified in law, northern segregation relied on policy, zoning, and administrative practices to achieve similar results. The long-term impact on Chicago’s Black communities reflects the consequences of these forms of institutionalized racism, emphasizing the importance of both historical understanding and ongoing policy reform.

Chicago’s school segregation was neither accidental nor a mere reflection of demographics; it was the product of housing, political, and administrative decisions designed to preserve racial separation. The city’s leaders built a system that mirrored the logic of Jim Crow without its explicit legal framework, making northern segregation harder to see. Through policies couched in bureaucratic language, Chicago Public Schools and city officials ensured that children received unequal educations for decades.

The legacy of Chicago’s segregation exposes the engineered character of educational inequality. Although activists, parents, and students fought to expose this discrimination in the mid-twentieth century, the structures they confronted continue to shape educational outcomes today. Understanding the intentional design behind Chicago’s segregation is essential to understanding the persistent racial inequality that defines American schooling. It is also a call to action for reformers today to confront the historical and structural forces that produced these disparities. The fight for equitable education is not just about addressing present-day inequalities but also about dismantling the policies and systems that were built to maintain racial separation. The struggle for equality in education remains unfinished, and only by acknowledging the deliberate choices that created this situation can the structures that continue to limit opportunities for future generations be broken down.

Evening Star. (Washington, DC), Oct. 23, 1963. https://www.loc.gov/item/sn83045462/1963-10-23/ed-1/.

Evening Star. (Washington, DC), Oct. 22, 1963. https://www.loc.gov/item/sn83045462/1963-10-22/ed-1/.

Evening Star. (Washington, DC), Sep. 8, 1962. https://www.loc.gov/item/sn83045462/1962-09-08/ed-1/.

NAACP Legal Defense and Educational Fund. NAACP Legal Defense and Educational Fund Records: Subject File; Schools; States; Illinois; School Desegregation Reports, 1952–1956. Manuscript/Mixed Material. https://www.loc.gov/item/mss6557001591/.

The Robbins eagle. (Robbins, IL), Sep. 10, 1960. https://www.loc.gov/item/sn2008060212/1960-09-10/ed-1/.

The Key West citizen. (Key West, FL), Jul. 9, 1963. https://www.loc.gov/item/sn83016244/1963-07-09/ed-1/.

Carl, Jim. “Harold Washington and Chicago’s Schools between Civil Rights and the Decline of the New Deal Consensus, 1955-1987.” History of Education Quarterly 41, no. 3 (2001): 311–43. http://www.jstor.org/stable/369199.

Danns, Dionne. Crossing Segregated Boundaries: Remembering Chicago School Desegregation. New Brunswick, NJ: Rutgers University Press, 2020. https://research.ebsco.com/linkprocessor/plink?id=a82738b5-aa61-339b-aa8a-3251c243ea76.

Danns, Dionne. “Chicago High School Students’ Movement for Quality Public Education, 1966-1971.” The Journal of African American History 88, no. 2 (2003): 138–50. https://doi.org/10.2307/3559062.

Danns, Dionne. “Northern Desegregation: A Tale of Two Cities.” History of Education Quarterly 51, no. 1 (2011): 77–104. http://www.jstor.org/stable/25799376.

Delmont, Matthew F. Why Busing Failed: Race, Media, and the National Resistance to School Desegregation. Oakland: University of California Press, 2016.

Daniel, Philip T. K. “A History of the Segregation-Discrimination Dilemma: The Chicago Experience.” Phylon 41, no. 2 (1980): 126–36. https://doi.org/10.2307/274966.

Daniel, Philip T. K. “A History of Discrimination against Black Students in Chicago Secondary Schools.” History of Education Quarterly 20, no. 2 (1980): 147–62. https://doi.org/10.2307/367909.

Dimond, Paul R. Beyond Busing: Reflections on Urban Segregation, the Courts, and Equal Opportunity. Ann Arbor: University of Michigan Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=76925a4a-743d-3059-9192-179013cceb31.

Sugrue, Thomas J. Sweet Land of Liberty: The Forgotten Struggle for Civil Rights in the North. New York: Random House, 2008.


[1] Thomas J. Sugrue, Sweet Land of Liberty: The Forgotten Struggle for Civil Rights in the North (New York: Random House, 2008).

[2] Philip T. K. Daniel, “A History of the Segregation-Discrimination Dilemma: The Chicago Experience,” Phylon 41, no. 2 (1980): 126–36.

[3] Paul R. Dimond, Beyond Busing: Reflections on Urban Segregation, the Courts, and Equal Opportunity (Ann Arbor: University of Michigan Press, 2005).

[4] Dionne Danns, Crossing Segregated Boundaries: Remembering Chicago School Desegregation (New Brunswick, NJ: Rutgers University Press, 2020).

[5] Jim Carl, “Harold Washington and Chicago’s Schools between Civil Rights and the Decline of the New Deal Consensus, 1955–1987,” History of Education Quarterly 41, no. 3 (2001): 311–43.

[6] The Robbins Eagle (Robbins, IL), September 10, 1960.

[7] The New York Times, “Fight on the Floor Ruled out,” July 27, 1960, 1.

[8] Dionne Danns, “Northern Desegregation: A Tale of Two Cities,” History of Education Quarterly 51, no. 1 (2011): 77–104.

[9] Matthew F. Delmont, Why Busing Failed: Race, Media, and the National Resistance to School Desegregation (Oakland: University of California Press, 2016).

[10] Dionne Danns, “Chicago High School Students’ Movement for Quality Public Education, 1966–1971,” Journal of African American History 88, no. 2 (2003): 138–50.

[11] NAACP Legal Defense and Educational Fund, Subject File: Schools; States; Illinois; School Desegregation Reports, 1952–1956, Manuscript Division, Library of Congress.

[12] Evening Star (Washington, DC), September 8, 1962.

Camden’s Public Schools and the Making of an Urban “Lost Cause”

In modern-day America, there is perhaps no city quite as infamous as Camden, New Jersey. A relatively small urban community on the banks of the Delaware River, directly across from the sprawling metropolis of Philadelphia, Camden would, in almost any other circumstance, be a niche community familiar only to those in the immediate surrounding area. Instead, the story of Camden stands as one of the greatest instances of institutional collapse and urban failure in modern America, akin to the catastrophes that befell communities such as Detroit, Michigan, and Newark, New Jersey, throughout the mid-twentieth century.

Once an industrial juggernaut housing powerful manufacturing corporations such as RCA Victor and the New York Shipbuilding Corporation, Camden was among the urban communities most integral to the American war effort and eventual victory in the Pacific Theatre of World War II. In the immediate aftermath of the war, however, Camden experienced significant decline, its once-prosperous urban hub giving way to a landscape of disinvestment, depopulation, and despair. By the late twentieth century, specifically the 1980s and 1990s, Camden had devolved into a community wracked by poverty, crime, and drug abuse, bearing the notorious label “Murder City, U.S.A.,” a moniker that recast decades of systemic inequity and institutional discrimination as a fatalistic narrative, presenting Camden as a city beyond saving, destined for failure. Camden’s decline, however, was neither natural nor inevitable; it was engineered through public policy. Through a calculated process of institutional segregation and racial exclusion, state and city lawmakers took advantage of Camden’s failing economy and evaporating job market to confine communities of color to deteriorating neighborhoods, effectively denying them access to the educational and economic opportunities afforded to white suburbanites in the surrounding area.

This paper focuses chiefly on Camden’s educational decline and inequities, situating them within a broader historical examination of postwar urban America. Utilizing the historiographical frameworks of Arnold Hirsch, Richard Rothstein, Thomas Sugrue, and Howard Gillette, this research seeks to illustrate how segregation and suburbanization functioned as reinforcements of racial inequity, and how such disenfranchisement created the perfect storm of educational failure in Camden’s public school network. The work of these scholars demonstrates that Camden’s neighborhoods, communities, and schools were intentionally structured to contain, isolate, and devalue communities and children of color, and that these trends were not unintended byproducts of natural spatial migration or economic development. Within this context, it is clear that public education in Camden did not simply mirror urban segregation but institutionalized it: schools became both a reflection and a reproduction of the city’s racial geography, entrenching the divisions drawn by policymakers and real estate developers into a pervasive force present in every facet of life in Camden.

In examining the influence of Camden’s segregation on public education, this study argues that the decline of the city’s school system was not merely a byproduct, but an engine of institutional urban collapse. The racialized, inequitable geography of public schooling in Camden began as a deliberate product of institutional disenfranchisement and administrative neglect, but quickly transformed into a self-fulfilling prophecy of failure, as crumbling school buildings and curricular inequalities became manifestations of policy-driven failure, and narratives of students of color as “inferior” were internalized by children throughout the city. Media portrayals of the city’s school system and its youth, meanwhile, transformed these failures into moral narratives, depicting Camden’s children and their learning communities as symbols of inevitable dysfunction rather than victims of institutional exclusion. Thus, Camden’s transformation into the so-called “Murder Capital of America” was inseparable from the exclusionary condition of the city’s public schools, which not only bore witness to segregation but became its most visible proof, informing fatalistic narratives about the city and the moral character of its residents.

Historians of postwar America have long established racial and socioeconomic segregation as essential to the development of the modern American urban and suburban landscape, manufactured and carefully reinforced throughout the twentieth century by the nation’s political and socioeconomic elite. Foundational studies such as Arnold Hirsch’s Making the Second Ghetto: Race and Housing in Chicago (1983) and Richard Rothstein’s 2017 text The Color of Law: A Forgotten History of How Our Government Segregated America reinforce this understanding of postwar urban redevelopment and suburban growth, situating both as the direct result of institutional policy rather than mere byproducts of happenstance migration patterns.[1] In The Color of Law, Rothstein explores the role of federal and state political institutions in the codification of segregation through intergenerational policies of redlining, mortgage restrictions, and exclusionary patterns in the extension of mortgage insurance along racial lines. In particular, Rothstein focuses on the Federal Housing Administration’s creation of redlining maps, which designated majority Black and Hispanic neighborhoods as high-risk “red zones,” effectively denying residents of these communities home loans and thus intentionally erecting barriers to intergenerational wealth accumulation through homeownership in suburban communities such as Levittown, Pennsylvania.[2]

Hirsch’s Making the Second Ghetto echoes this narrative of urban segregation as manufactured, primarily through his “second ghetto” thesis. Conducting a careful case study of Chicago, Hirsch argues that local municipalities, urban planners, and the business elite worked in tandem to enact policies of “domestic containment,” wherein public housing projects were weaponized against Black and Hispanic communities to reinforce racial segregation throughout the city. Utilizing public housing as an anchor rather than a tool of mobility, Chicago’s socioeconomic and political elite effectively conspired to confine Black Chicagoans to closely regulated low-income communities. They devalued land and property in these areas while zoning more desirable land for redevelopment and suburban growth, thereby raising housing and relocation costs to a level that Black Americans simply could not afford, given the devaluation of their own communities and the generational barriers to wealth accumulation they faced.[3] Chris Rasmussen’s “Creating Segregation in the Era of Integration” applies such narratives to a close investigation of New Brunswick, New Jersey, particularly with regard to educational segregation, examining how city authorities used similar institutional frameworks of racial separation (school zoning, prioritization of white communities and schools for development, and segregationist housing placements) to confine students to segregated schools and resist integration, building on the community segregation detailed in the work of Rothstein and Hirsch.[4]

Working in tandem with these perspectives on segregation as integral to suburban development and urban decline, historians have also identified disinvestment as a critical economic process in the exacerbation of urban inequality and eventual decay. Beginning in the aftermath of World War II and the onset of suburban development, industrial urban communities faced significant shortages of manufacturing employment, as corporations outsourced their labor to overseas and suburban locations, often following the migration of white suburbanites. Robert Beauregard’s Voices of Decline: The Postwar Fate of U.S. Cities diverges from the perspectives of Hirsch and Rothstein, citing declining employment opportunities and urban disinvestment as the most important factors in the decline of urban America on a national scale. Beauregard argues that by framing the disinvestment of urban wartime industrial juggernauts such as Newark, Camden, and Detroit as an “inevitability” in the face of rapid deurbanization and suburban growth, policymakers at the national and local levels portrayed urban decline as a natural process, rather than a deliberate effort to strip employment opportunities and the accumulation of capital from urban communities of color, even before suburbanization occurred on a large scale.[5] Thomas Sugrue’s The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit adheres to this perspective as well, situating economic devastation in the context of racially exclusive suburban development. Sugrue thereby ties together the perspectives expressed here, crafting a comprehensive narrative of urban decline in mid-twentieth-century America as cyclical: a recurring pattern of unemployment, abject poverty, and lack of opportunity reinforced by the very public policies and social programs that, in theory, were supposed to alleviate such burdens.[6]

Ultimately, while these sources focus on differing aspects of urban decline, together they allow for a comprehensive portrait of the causes of urban decay in postwar America. From deindustrialization to segregation and its influence on educational disparities, these works provide essential context for an in-depth examination of Camden, New Jersey, both the city itself and its public education system. While these sources may not all cite Camden specifically, the themes and trends they identify each ring true and feature prominently in the city’s story throughout this period.

However, this paper diverges significantly from this pre-existing literature by positioning the failure of public education in Camden as a key factor in the city’s decline, rather than a mere byproduct. A common trend in much of the scholarship discussed above is that educational failure is examined not as a contributing root of Camden’s decline (and certainly not an important one, when education is briefly discussed in this context), but rather as a visible, tangible marker of urban decay. While this paper does not deny that failures in education are rooted in fundamental inequities in urban spaces and broader social failings, it seeks to position Camden’s failing educational system not only as a result of urban decline but as a contributor to it, specifically by examining how educational failure transformed narratives around Camden into those of a failed urban community, beyond help and destined for ruin. In doing so, this paper advances a distinct argument: that Camden’s educational collapse must be understood not merely as evidence of urban decline, but as a foundational force that actively shaped, and in many ways intensified, the narrative of Camden as a city fated for failure.

Before launching into an exploration of Camden’s public schooling collapse and its influence on the city’s reputation and image, it is important to establish a clear understanding of the context of these shortcomings. Because this paper focuses specifically on the institutional failure of Camden’s public schooling system, and how such failures shaped perceptions of the city as an urban lost cause, this section concentrates on rising rates of racial segregation in the mid-twentieth century, both within city limits and beyond, particularly across Camden County’s sprawling network of suburban communities. While deindustrialization, economic failure, and governmental neglect certainly factored into the creation of an urban environment hostile to educational success, racial segregation was chiefly responsible for the extreme disparities in educational outcomes throughout the greater Camden region, and it is most relevant to this paper’s discussion of the racialized narratives of inevitable urban failure that proved so pervasive on a national scale, both in the mid-to-late twentieth century and into the present day.

Such trends date back to one of the massive demographic transitions of the pre–World War II era: the Great Migration, the mass movement of Black Americans to northern industrial cities. Drawn by the promise of stable employment and the prospect of greater freedom and equality than was available in the Jim Crow South, millions of migrants relocated to urban centers along the Northeastern seaboard. Camden, New Jersey, was among these destinations, attracting a growing Black population throughout the early twentieth century due to its concentration of manufacturing giants such as RCA Victor, the New York Shipbuilding Corporation, and Campbell’s Soup.[7] With the outbreak of war in Europe in 1939, and especially following the United States’ entry into World War II after Pearl Harbor, industrial production in Camden surged. The city soon emerged as a vital hub of wartime manufacturing and domestic production, cementing its status as a key center of American industrial might.

As a direct result of its industrial growth and expanding wartime economy, Camden continued to attract both Black Americans and new immigrant populations, many of whom were of Latino descent. Among these groups were large numbers of Stateside Puerto Ricans, continuing a pattern of migration dating back to the 1917 extension of U.S. citizenship to Puerto Ricans.[8] Motivated by many of the same factors as Black migrants, chiefly the pursuit of steady employment and improved living conditions, these communities helped shape Camden into a diverse and vibrant urban center. The city’s population of color expanded rapidly during this period, its growth driven by wartime prosperity and the allure of industrial opportunity.

Following American victory in the Pacific and the end of World War II, Camden continued to experience rapid economic growth, although tensions arose among the city’s residents during this period along racial and ethnic lines. With the common American enemies of Japan and Nazi Germany firmly removed from the picture, hostilities began to turn inward, and racial tensions skyrocketed, especially at the dawn of the Civil Rights Movement. As historian Chris Rasmussen writes in “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965-1976”, “While Brown and the ensuing civil rights movement pointed toward racial integration, suburbanization forestalled racial equality by creating and reinforcing de facto segregation. As many whites moved to the suburbs, blacks and Latinos remained concentrated in New Jersey’s cities.”[9] Thus, as Black Americans increasingly emerged victorious in the fight against racial injustice and accumulated more rights and legal protections, city-dwelling white Americans grew increasingly fearful and resentful, spurring a mass exodus from urban population centers – including Camden. Drawn by federally backed mortgages, the expansion of highways, and racially exclusive housing policies,[10] white residents moved to neighboring suburbs such as Cherry Hill, Haddonfield, and Pennsauken, while structural barriers effectively excluded Black and Latino residents from the same opportunities. Leaving in droves, white residents took significant wealth, capital, and major businesses with them, weakening the city’s financial base and leaving workers—particularly people of color—vulnerable to unemployment.[11]

Public and private institutions increasingly withdrew resources from neighborhoods perceived as declining or racially changing. Banks engaged in redlining, denying mortgages and loans to residents of nonwhite neighborhoods, while government budgets prioritized the needs of more affluent suburban constituencies over struggling urban areas.[12] Businesses and developers often chose to invest in the suburban communities to which white families were relocating, rather than in Camden itself, creating a feedback loop of declining property values, eroding tax revenue, and worsening public services. As historian Robert Beauregard writes in Voices of Decline: The Postwar Fate of U.S. Cities, “…while white middle-class and young working-class households had resettled in suburban areas, elderly and minority and other low-income households remained in the central cities. This increased the demand for basic public services (e.g. education) while leaving city governments with taxpayers having lower earnings and less property to tax.”[13] Thus, the Camden residents left behind within the confines of the city became increasingly dependent on social welfare programs, which local and state governments began to fund less and less. This combination of economic retrenchment, racialized perceptions of neighborhood “desirability,” and policy-driven neglect fueled a cycle of disinvestment that disproportionately affected communities of color, leaving the city structurally disadvantaged.[14]

Concerns about racial integration in neighborhoods and schools also motivated many families to leave, as they sought communities aligned with their social and economic preferences. Such demographic change was rapid, and by 1950 approximately 23.8 percent of Camden City’s population was nonwhite.[15] While that figure may not seem extreme to the modern American, an individual likely familiar with diverse communities and perspectives, it is particularly shocking when placed in the context of Camden’s surrounding suburbs: by 1950, the nonwhite population was a mere 4.5 percent in Pennsauken, 2.1 percent in Haddonfield, and an even lower 1.9 percent in Cherry Hill.[16] These figures serve as an exemplary demonstration of the cyclical nature of segregation in New Jersey’s educational sector, contextualizing twentieth-century segregation not as a unique occurrence, but rather as a continuation of historical patterns. In the nineteenth century, the majority of the state’s schools were segregated along racial lines, and in 1863, New Jersey’s state government directly sanctioned the segregation of public school districts statewide. While that decision would ultimately be reversed in 1881, active opposition to integration persisted into the twentieth century, particularly within elementary and middle school education. For example, a 1954 study found that New Jersey schools, both historically and at the time of writing, “…had more in common with states below than above…” the Mason-Dixon line. Most notably, however, by 1940 the state had more segregated schools than at any point before the passage of explicit anti-segregation legislation in 1881.[17] Thus, it is evident that the state of Camden’s schools in the mid-twentieth century was not an isolated incident, but rather indicative of the cyclical nature of racial separation and disenfranchisement throughout New Jersey in an educational context.

These demographic and economic shifts had profound implications for Camden’s schools, which now served largely Black and Latino student populations. In particular, Blaustein’s work proves especially valuable in demonstrating the catastrophic impacts of white flight on Camden’s schools, as well as the irreversible harm inflicted on students of color as a result of institutional failures in education. Writing in a 1963 report to the Civil Rights Commission under then-President John F. Kennedy – a cautious supporter of the Civil Rights Movement – notable civil rights lawyer Albert P. Blaustein establishes a clear portrait of the declining state of Camden’s public schooling system, as well as the everyday issues facing students and educators alike in the classroom. In delivering a scathing report on neighborhood segregation within the city of Camden, as demonstrated by demographic data on the race and ethnicity of students enrolled in public education across the Camden metropolitan area, Blaustein writes:

Northeast of Cooper River is the area known as East Camden, an area with a very small Negro population. For the river has served as a barrier against intracity population…Two of the four junior high schools are located here: Davis, which is 4.0 percent Negro and Veterans Memorial which is 0.2 percent Negro. Also located in East Camden are six elementary schools, four of which are all-white and the other two of which have Negro percentages of 1.3 percent and 19.7 percent…Central Camden, on the other hand, is largely Negro. Thus, the high percentage of Negroes in Powell (100.0 percent), Sumner (99.8 percent), Fetters (91.6 percent), Liberty (91.2 percent), and Whittier (99.1 percent), etc.[18]

Based on the data provided here by Blaustein, it is simply impossible to argue that racial segregation did not occur in Camden. Additionally, it becomes quite clear that while much discussion of Camden’s public schools and of demographic change in the city as a whole focuses on the movement of white residents to suburban areas, racial segregation and stratification absolutely did occur within the city itself, further worsening educational opportunities and learning outcomes for Camden’s students of color.

            However, Blaustein does not end his discussion with segregation among student bodies, but rather extends his research to a close examination of the racial and ethnic composition of school leadership, including teachers, administrators, and school board members, yielding similar results. For example, according to his work, the Fetters School, with a student body that was 91.6 percent Black, employed nine white teachers and nine Black teachers in 1960, but two white teachers and sixteen Black teachers in 1963. Even more shockingly, Central School, composed of 72.9 percent Black students, employed only white teachers in 1955. By 1963, just eight years later, this composition had completely reversed, and the school employed all Black educators.[19] Thus, Blaustein’s investigation of variances in Camden public schools’ racial composition reveals that this issue was not simply limited to education or exclusionary zoning practices, but was rather an insidious demographic trend that had infested all areas of life in Camden, both within education and outside of classrooms. In ensuring that Black students were taught only by Black teachers and white students by white teachers, education in Camden was profoundly nondiverse, eliminating opportunities for cross-racial understanding or exposure to alternative perspectives, thereby keeping Black and white communities separate not just in residence and education, but also in interaction and socialization.

            With the existence of racial segregation both within Camden and in the city’s surrounding area clearly established, we can now move to an exploration of inequalities in public education within the city. Perhaps the most visible and apparent marker of these inequalities can be found in school facilities and buildings. The physical conditions in which children of color were schooled were grossly outdated, especially in comparison to the facilities provided to white children, both inside and outside the city of Camden. For example, as of 1963, six public schools had been cited by Camden’s local legislative board as in dire need of replacement or renovation, the majority of them located in segregated communities: Liberty School (1856, 91.2% Black student population), Cooper School (1874, 30.7% Black student population), Fetters School (1875, 91.6% Black student population), Central School (1877, 72.9% Black student population), Read School (1887, 32.0% Black student population), and finally, Bergen School (1891, 45.6% Black student population).[20] Of the schools cited above, half of the buildings that the city of Camden had deemed unfit for use and nonconducive to education served majority-Black student populations (Liberty, Fetters, and Central), whereas Bergen School was split just short of evenly between Black and white low-income students.

Additionally, it is important to acknowledge that these figures account only for the absolute worst of Camden’s schools; such inadequate buildings and facilities existed throughout the city, in accordance with the general quality of infrastructure and housing in each neighborhood in which they were located. In other words, while the data above reference only a small sample of Camden’s schools, the trend they reflect – the intentional zoning of Black students into old, run-down facilities – was characteristic of Camden’s public schools as a whole.

  Education researcher Jonathan Kozol expands on the condition of school facilities in Camden’s disenfranchised communities in his widely influential book, Savage Inequalities. Written in 1991, Kozol’s work serves as a continuation of Blaustein’s discussion of the failing infrastructure of public education in Camden, providing an updated portrait of the classrooms serving the city’s poorest communities. Kozol pulls no punches in a truly visceral recollection of his visit to Pyne Point Middle School, writing:

…inside, in battered, broken-down, crowded rooms, teem the youth of Camden, with dysfunctional fire alarms, outmoded books and equipment, no sports supplies, demoralized teachers, and the everpresent worry that a child is going to enter the school building armed.[21]

Ultimately, it is inarguable that the physical quality of public schools and educational facilities in Camden was deeply unequal, reflecting broader residential trends. Just as poor, minority-majority neighborhoods saw their property values degraded and were confined to dilapidated areas of the city as a direct result of redlining and other racist housing policies, so too were children of color in Camden zoned into old, crumbling school buildings that by this time barely remained standing, effectively stripping them of the educational resources and physical comforts provided to white students both in the city and in its neighboring suburbs.

            Such inequalities were also present in records of student achievement and morale. Educated in barely-standing school buildings overseen by cash-strapped school districts, students of color in Camden’s poor communities were not afforded nearly the same learning opportunities or educational resources as white students in the area. In Camden and Environs, Blaustein cites Camden superintendent Dr. Anthony R. Catrambone’s perspective on inequalities in education, writing, “…pupils from Sumner Elementary School (99.8 percent Negro) who transfer to Bonsall Elementary School (50.3 percent Negro) ‘feel unwanted, and that they are having educational problems not experienced by the Negroes who have all their elementary training at Bonsall’ [Catrambone’s words].”[22]

            Thus, it is evident not only that inequalities in schooling facilities and instruction resulted in a considerable achievement gap between students in segregated and integrated communities, but also that such inequalities were clear and demonstrable, even to students themselves at the elementary level. Catrambone’s observation that students from Sumner felt “unwanted” and viewed themselves as struggling suggests that students in Camden’s segregated neighborhoods internalized the city’s structural inequality, viewing themselves as lesser than their white and integrated peers in both intellectual capacity and personal character. Such perceptions, reinforced by the constant presence of systemic discrimination along racial lines as well as crumbling school facilities and housing, became deeply entrenched in the minds and hearts of Camden’s youth, creating cyclical trends of educational failure, reinforced both externally by social structures and institutions and internally within segregated communities of color.

            Similarly, dysfunction soon became synonymous with segregated schools and low-income communities of color at the institutional level. School administrators and Boards of Education began to expect failure of students of color, stripping away any opportunity for their schools to prove otherwise. Camden’s school leadership, for instance, often assigned rigorous curricula and college-preparatory courses to majority-white schools, neglecting to extend the same opportunities to minority-majority schools. Reporting on administrative conversations about the potential integration of Camden High School in 1963, Blaustein observes:

The maintenance of comprehensive academic tracks was recognized by the administration as dependent on white students, implying that students of color alone were not expected to sustain them: ‘if these pupils [white college preparatory students from the Cramer area] were transferred to Woodrow Wilson [a majority-Black high school located in the Stockton neighborhood], Camden High would be almost entirely a school for business instruction and training in industrial arts.’[23]

It is vital to first provide context for Blaustein’s usage of the terms “business instruction” and “industrial arts.” These terms refer primarily to what modern-day America calls “vocational education.” With this crucial context firmly established, it becomes evident that public educators in early-1960s Camden viewed college education as a racially exclusive opportunity, to be extended only to white students.

Such attitudes were reflected in the curricular rigor of Camden’s minority-majority schools, which were, to say the least, held to an extremely low standard. The lessons designed for children of color were incredibly simple, as schools were treated less as institutions of learning and self-improvement than as detention centers for the city’s disenfranchised youth. As Camden native and historian David Bain writes in Camden Bound, “History surrounds the children of Camden, but they don’t learn a lot of it in school…Whitman is not read by students in the basic skills curriculum. Few students that I met in Camden High, indeed, had ever heard of him.”[24] As such, Black and Hispanic students were effectively set up for failure compared to white students, viewed as predestined either to drop out of their primary schooling or to enter lower-paying careers and vocational fields rather than pursue higher education and the opportunities that college afforded, particularly during a period when college degrees were significantly rarer and more highly valued than in the modern day.

            Thus, it is evident that throughout the mid-twentieth century, Camden’s public school system routinely failed Black and Hispanic students. Through inequalities in school facilities and curriculum, the system repeatedly communicated to students in segregated areas that they simply were not worth the time and resources afforded to white students, and that they did not possess the same intellectual capacity as suburban children. Denying children quality schools and viewing them as predestined high school drop-outs, Camden’s public schools never truly invested in their students, creating an atmosphere of perpetual administrative negligence toward improving schools and learning outcomes for the city’s disadvantaged youth. As Blaustein so aptly writes, “‘…the school authorities are against changing the status quo. They want to avoid headaches. They act only when pressures are applied.’”[25]

It is clear that such drastic disparities in learning outcomes arose not only from administrative negligence, but also as a direct result of segregation within the city. While no law affirming segregation was ever passed in New Jersey, schools in Camden were completely and unequivocally segregated, and a clear hierarchy existed in determining which schools and student populations were most supported and best prepared for success. Time and time again, educators favored white students and white schools, kicking students of color and their school communities to the curb. It is against this backdrop of negligence and resignation that wider narratives of the city of Camden and its youth as “lost causes” beyond any and all help began to emerge.

By the late twentieth century (specifically the 1980s and 1990s), narratives of Camden as a drug- and crime-infested urban wasteland began to propagate, rising to a national scale in the wake of increasing gang activity and rapidly rising crime rates in the area. While public focus centered on the city’s criminal justice department and woefully inept political system, reporting on the state of Camden’s public schools served to reinforce perceptions of the city as destined for failure and beyond saving, chiefly through the local press’s demonization of Camden’s youth. For example, the Courier-Post article “Battle being waged to keep youths from crime” reads, “‘Girls are being raped in schools, drugs are proliferating, alcohol is proliferating, and instead of dealing with it, some parents and administrators are in denial…they insist it’s not happening in their backyard’”.[26] The manner in which this author speaks of public schooling in Camden reads as though the city’s schools were not learning communities but prisons – and the students inhabiting them not children, but prisoners, destined to be nothing more than “thugs”.

  Articles such as this ignore the city’s long history of racial segregation and redlining – which, as established earlier in this paper, resulted not only in disparities in learning outcomes but also in a deep internalization of institutional failure within many students of color and their learning communities – and show no willingness to truly explore the roots of crime and poverty in Camden, focusing instead on the results of decades of institutional neglect of communities of color rather than on their root causes. In doing so, media coverage of such failures removed the burden of responsibility from the city lawmakers and school administrators responsible for abject poverty and educational disparities, instead putting the onus on communities that were intentionally and perpetually disenfranchised at the institutional level across all aspects of Camden’s sociopolitical network.

Additionally, this article’s veiled portrayal of Camden parents as disinterested and uninvested in their children’s success is especially egregious and inaccurate. The fact of the matter is that parents and local communities within even the most impoverished and crime-ridden neighborhoods of Camden had long lobbied for improvements to public schooling and to their communities, concerned chiefly with their children’s futures and opportunities. By the late 1990s, for example, Camden City’s charter network had experienced significant growth, much of its early success owed directly to parents and grassroots organizations devoted to improving the post-schooling opportunities of disadvantaged children. In 1997, over seventeen new charters were approved in Camden, the first opening in September of that year. The LEAP Academy University Charter School was the result of years of political lobbying and relentless advocacy, the loudest voices coming from parents and community activist groups. Spearheaded by Rutgers University-Camden professor and city native Gloria Bonilla-Santiago, the LEAP Academy included dedicated parent action committees and community outreach boards, and sponsored numerous community service events.[27] Lumping one of the only groups truly invested in the success of Camden’s children of color together with the institutions that repeatedly conspired to confine those children to crumbling schools and prepare them only for low-paying occupations is wildly inaccurate and offensive in historical context. It demonstrates how media narratives around Camden and its school system repeatedly disregarded factually correct reporting in favor of sensationalized accounts of Camden’s struggles, framing the city’s schools and youth as ground zero for – and progenitors of – the wider issues facing the city as a whole.

While community activism was absolutely present across Camden, it is also important to note the damaging impact of such negative narratives on the city’s residents. In Camden Bound, a literary exploration of the history of Camden and its community, Camden-born historian David Bain describes his own internalization of the damaging, sensationalized portrayals of the city. He writes:

For most of my life, my birthplace, the city of Camden, has been a point of irony, worth a wince and often hasty explanation that though I was born in Camden, we didn’t actually ever live in Camden, but in a succession of pleasant South Jersey suburban towns…As I moved through life…I would write out the name Camden (I’m ashamed to name my shame now) with a shudder.[28]

While Camden Bound relates specifically to Bain’s individual experience and his struggle to acknowledge his birthplace in the wake of its national infamy, he spends perhaps even more time exploring the current state of the city and the perspectives of its residents. In recounting his most recent visit to Camden, Bain describes nothing short of absolute devastation, social blight, and urban decay, writing:

Too many newspaper headlines crowd my brain – “Camden Hopes for Release From Its Pain”; “In Struggles of the City, Children Are Casualties”; “Camden Forces Its Suburbs To Ask, What If a City Dies?”; “A Once Vital, Cohesive Community is Slowly, but Not Inevitably, Dying.” And that devastating question from Time: “Who Could Live Here?”…It has been called the poorest city in New Jersey, and some have wondered if it is the poorest in the nation. Adult men and women stand or sit in front of their shabby two-story brick houses, stunned by purposelessness. In abandoned buildings, drug dealers and their customers congregate. On littered sidewalks, children negotiate through broken glass, condoms, and spent hypodermics.[29]

Judging from Bain’s description of the sights he witnessed in Camden, it is evident that the city’s residents had been worn down by the widely circulating narratives of the city and its national infamy. With the vast majority of residents poverty-stricken and lacking the financial or social capital to create meaningful change for their communities themselves, such headlines and narratives were nothing short of devastating. These soul-crushing portrayals signaled yet another round of negligence and resignation by powerful voices – in the media, local politics, and even the national government – demonstrating a national perception of Camden as “failed,” a perception that was in turn internalized by Camden’s residents.

For example, in interviewing Rene Huggins, a community activist and director of the Camden Cultural Center, Bain relays her frustration with recent state legislation following the assumption of office by Republican governor Christine Todd Whitman, and with the rollback of welfare programs, occupational training, and educational funding that had been promised to the city. Speaking on the increasing hopelessness of many city residents, Huggins states, “And on top of all that…we get that headline in Time magazine – ‘Who Could Live Here?’ Why not just give us a lot of shovels and bury the place?”[30] Such statements, alongside Bain’s experiences of Camden, demonstrate the consequences of national resignation to the state of the city. Facing a lack of will or initiative to improve Camden – and, even more damagingly, the removal of resources and social initiatives designed specifically to do so – many residents adopted a similar mentality of resignation and shame toward their community. Spurned and failed by powerful sociopolitical institutions and organizations across generations, they chose simply to exist with the city’s misery rather than pursue real, meaningful change, thereby reinforcing the harmful narratives that had played such a crucial role in producing this resignation in the first place.

The very article mentioned in ire by Rene Huggins, Kevin Fedarko’s “Who Could Live Here?”, also offers insight into public perceptions of Camden – and more specifically, of its youth – during the late twentieth century. Written in 1992, Fedarko’s piece portrays the city of Camden as a barren wasteland and its inhabitants – predominantly young people and children – as akin to nothing more than prisoners and criminals. For example, Fedarko writes:

The story of Camden is the story of boys who blind stray dogs after school, who come to Sunday Mass looking for cookies because they are hungry, who arm themselves with guns, knives and — this winter’s fad at $400 each — hand grenades. It is the story of girls who dream of becoming hairdressers but wind up as whores, who get pregnant at 14 only to bury their infants.[31]

Fedarko’s description of Camden’s children is extraordinarily problematic, in that it not only treats the city’s youth as a monolithic group, but proceeds to demonize them en masse. In describing the city’s young people as baselessly sadistic and violent, while neglecting to situate rising youth crime rates in the context of historical disenfranchisement or to acknowledge that this portrait does not describe all of the city’s young people, Fedarko’s work only furthers narratives of Camden’s youth as lawless and destined for jail cells rather than degrees. In particular, Fedarko’s description of Camden’s young women as “whores” is especially egregious, considering that the people of whom he speaks are children; he applies gratuitously derogatory labels to young women (largely women of color) while failing to acknowledge the true tragedy of Camden and the conditions to which its young people are subjected. In describing the situation of a teenager involved in gang activity, Fedarko employs similarly disrespectful and dehumanizing language, writing:

…drug posses…use children to keep an eye out for vice-squad police and to ferry drugs across town. Says “Minute Mouse,” a 15-year-old dealer: “I love my boys more than my own family.” Little wonder. With a father in jail and a mother who abandoned him, the Mouse survived for a time by eating trash and dog food before turning to the drug business.[32]

Ultimately, it is evident that during the late twentieth century, specifically the eighties and nineties, narratives surrounding Camden portrayed the city as nothing more than an urban wasteland and lost cause, a sad excuse for urban existence that belied its history as a sprawling manufacturing juggernaut. More damaging, however, were the narratives surrounding the people of Camden (especially its youth), who became synonymous with violence and criminal activity rather than opportunity or potential. In short, media coverage of Camden was concerned chiefly with the spectacle of an urban space and people in chaos, prioritizing the drama of Camden’s failures over the historical tragedy of the city and neglecting to situate the former in the context of de facto segregation and racialized disenfranchisement.

Ultimately, it cannot be denied that perceptions of Camden’s public education system as failing and its youth as morally debased were absolutely essential to the formulation of “lost cause” narratives regarding the city. In the popular imagination, Camden became synonymous with decay and dysfunction—a city transformed from a thriving industrial hub into what national headlines would later call “Murder City, U.S.A.” However, these narratives of inevitability in truth emerged from the city’s long history with racial segregation, economic turmoil, and administrative educational neglect. Camden’s schools were central to this development, acting as both products and producers of inequity, serving as clear symbols of the failures in public policy, which were later recast as moral shortcomings of disenfranchised communities themselves.

As demonstrated throughout this study, the structural roots of Camden’s failures in public education were grounded in segregation, manufactured by the same redlining maps and exclusionary residency policies that confined families of color to the city’s most desolate neighborhoods and that also determined the boundaries of their children’s schools. White flight and suburban migration drained Camden of its capital and tax base, concentrating resources instead in suburban communities whose existing affluence was only reinforced by federal mortgage programs and social support. Historical inquiry into urban decline and the state of urban communities in the postwar period has long emphasized the importance of understanding urban segregation not as a natural social phenomenon, but rather as an inequity by design, extending into every aspect of civic life and education. Camden’s experience confirms this: segregation functioned not only as a physical division of space but as a moral and ideological one, creating the conditions for policymakers and the media to portray the city’s public schools as evidence of cultural pathology rather than systemic betrayal.

By the late twentieth century, these narratives had become fatalistic. Newspaper headlines depicted Camden’s classrooms as sites of chaos and its youth as violent, transforming real inequities into spectacle. The children who bore the weight of these conditions—students of color educated in crumbling buildings and underfunded programs—were cast as perpetrators of their city’s demise rather than its victims. The label “Murder Capital” distilled these complexities into a single, dehumanizing phrase, erasing the structural roots of decline in favor of a narrative that made Camden’s suffering appear inevitable. In doing so, public discourse not only misrepresented the city’s reality but also justified further disinvestment, as policymakers treated Camden’s collapse as a moral failure rather than a product of policy.

However, despite such immense challenges and the damaging narratives that had become so deeply entrenched in the American national psyche, Camden and its inhabitants persisted. Refusing to give up on their communities, Camden’s residents, many of whom lacked the influence and capital to create change alone, chose to band together and weather the storm of national infamy. From community activism to political lobbying, Camden’s communities of color demonstrated consistent self-advocacy. Viewing outside aid as perpetually promised yet never provided, they pooled their resources and invested in their own neighborhoods and children, establishing vast charter networks and advocating for criminal justice reform and community policing efforts.

While change was slow and often seemed unattainable, Camden has experienced a significant resurgence in the past decade. Through investment by major corporations and sports organizations (for example, the Philadelphia 76ers’ relocation of their practice facilities and front offices to the Camden Waterfront in 2016), as well as the Camden Education Fund’s revitalization of educational access and recruitment of teaching professionals, the city has slowly begun to reverse trends of decay and decline, pushing back against narratives that had deemed its failure inevitable and inescapable. This year the city celebrated its first homicide-free summer; Camden’s story is tragic, yet far from over. Rather than adhere to the story of persistent institutional failure and disenfranchisement, Camden’s residents have chosen to take charge of the narrative of their home and communities for themselves, changing it to one of perseverance, determination, and strength. In defiance of decades of segregation, disinvestment, and stigma, Camden stands not as America’s “Murder City,” but as its mirror—a testament to how injustice is built, and how, through resilience, effort, and advocacy, it can be torn down.

“The case for charter schools,” Courier Post, March 2, 1997.

Bain, David Haward. “Camden Bound.” Prairie Schooner 72, no. 3 (1998): 104–44. http://www.jstor.org/stable/40637098 

Beauregard, Robert A. Voices of Decline: The Postwar Fate of U.S. Cities. 2nd ed. New York: Routledge, 2003. http://www.123library.org/book_details/?id=112493

Blaustein, Albert P., and United States Commission on Civil Rights. Civil Rights U.S.A.: Public Schools: Cities in the North and West, 1963: Camden and Environs. Washington, DC: United States Commission on Civil Rights, 1964.

Douglas, Davison M. “The Limits of Law in Accomplishing Racial Change: School Segregation in the Pre-Brown North.” UCLA Law Review 44, no. 3 (1997): 677–744.

Fedarko, Kevin. “The Other America.” Time, January 20, 1992. https://content.time.com/time/subscriber/article/0,33009,974708-3,00.html

Gillette, Howard. Camden after the Fall: Decline and Renewal in a Post-Industrial City. Philadelphia: University of Pennsylvania Press, 2005.

Goheen, Peter G., and Arnold R. Hirsch. “Making the Second Ghetto: Race and Housing in Chicago, 1940-1960.” Labour / Le Travail 15 (1985): 234. https://doi.org/10.2307/25140590

Kozol, Jonathan. Savage Inequalities: Children in America’s Schools. New York: Broadway Books, 1991.

Rasmussen, Chris. “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965–1976.” History of Education Quarterly 57, no. 4 (2017): 480–514. https://www.jstor.org/stable/26846389

Rothstein, Richard. The Color of Law: A Forgotten History of How Our Government Segregated America. First edition. New York: Liveright Publishing Corporation, a division of W.W. Norton & Company, 2017.

Sugrue, Thomas J. The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. Princeton, NJ: Princeton University Press, 1996.

Tantillo, Sara. “Battle being waged to keep youths from crime,” Courier Post, June 8, 1998.

Yaffe, Deborah. Other People’s Children: The Battle for Justice and Equality in New Jersey’s Schools. New Brunswick, NJ: Rivergate Books, 2007. https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=225406


[1] Peter G. Goheen and Arnold R. Hirsch, “Making the Second Ghetto: Race and Housing in Chicago, 1940–1960,” Labour / Le Travail 15 (1985): 234.

[2] Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America (New York: Liveright, 2017).

[3] Goheen and Hirsch, “Making the Second Ghetto,” 234.

[4] Chris Rasmussen, “Creating Segregation in the Era of Integration: School Consolidation and Local Control in New Brunswick, New Jersey, 1965–1976,” History of Education Quarterly 57, no. 4 (2017): 480–514.

[5] Robert A. Beauregard, Voices of Decline: The Postwar Fate of U.S. Cities, 2nd ed. (New York: Routledge, 2003).

[6] Thomas J. Sugrue, The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit (Princeton, NJ: Princeton University Press, 1996).

[7] Howard Gillette, Camden after the Fall: Decline and Renewal in a Post-Industrial City (Philadelphia: University of Pennsylvania Press, 2005), 12–15.

[8] David Howard Bain, “Camden Bound,” Prairie Schooner 72, no. 3 (1998): 104–44.

[9] Rasmussen, “Creating Segregation in the Era of Integration,” 487.

[10] Rothstein, The Color of Law, 70–75; Gillette, Camden after the Fall, 52–54.

[11] Gillette, Camden after the Fall, 45–50; Bain, “Camden Bound,” 110–12.

[12] Sugrue, The Origins of the Urban Crisis, 35–40.

[13] Beauregard, Voices of Decline, 91.

[14] Gillette, Camden after the Fall, 50–55; Bain, “Camden Bound,” 120.

[15] Albert P. Blaustein, Civil Rights U.S.A.: Camden and Environs, report to the U.S. Civil Rights Commission, 1963, 22.

[16] Blaustein, Civil Rights U.S.A., 23–24.

[17] Davison M. Douglas, “The Limits of Law in Accomplishing Racial Change: School Segregation in the Pre-Brown North,” UCLA Law Review 44, no. 3 (1997): 677–744.

[18] Blaustein, Civil Rights U.S.A., 18.

[19] Blaustein, Civil Rights U.S.A., 18.

[20] Blaustein, Civil Rights U.S.A.

[21] Jonathan Kozol, Savage Inequalities: Children in America’s Schools (New York: Broadway Books, 1991).

[22] Blaustein, Civil Rights U.S.A., 22.

[23] Blaustein, Civil Rights U.S.A.

[24] Bain, “Camden Bound,” 120–21.

[25] Blaustein, Civil Rights U.S.A.

[26] Sara Tantillo, “Battle being waged to keep youths from crime,” Courier Post, June 8, 1998.

[27] Sara Tantillo, “The case for charter schools,” Courier Post, March 2, 1997.

[28] Bain, “Camden Bound,” 108–9.

[29] Bain, “Camden Bound,” 111.

[30] Bain, “Camden Bound,” 119.

[31] Kevin Fedarko, “The Other America,” Time, January 20, 1992.

[32] Ibid.

From Right to Privilege: The Hyde Amendment and Abortion Access

When the Supreme Court issued its decision in Roe v. Wade in 1973, it seemed as though abortion had finally been secured as a constitutional right. However, this ruling came after more than a century of contested abortion law in the United States. Beginning in the late nineteenth century, the American Medical Association led campaigns to criminalize abortion, which pushed midwives and women healers out of reproductive care[1]. Illegal abortion had been widespread and dangerous; even in the early twentieth century, physicians estimated that thousands of abortions were performed annually, many of them resulting in septic infections and hospitalizations[2]. Long before Roe, access to reproductive care was already shaped by race and class, as Rickie Solinger shows in her study of how unwed pregnancy was treated differently for white women and women of color[3]. Within a few years of the Roe v. Wade decision, however, the promise of abortion access was strategically narrowed. In 1976, Congress passed the Hyde Amendment, which banned the use of federal Medicaid funds for most abortions. This did not overturn Roe v. Wade, but it quietly transformed abortion from a legal right into an economic privilege, one that poor women could rarely afford to exercise. As Susan Gunty bluntly stated, “The Hyde Amendment did not eliminate abortion; it eliminated abortion for poor women.”[4] The Hyde Amendment redefined abortion rights by turning a constitutional guarantee into a privilege dependent on income. It represented a shift in strategy among anti-abortion advocates: instead of directly challenging Roe, they targeted public funding.[5] Representative Henry Hyde himself admitted that his goal was total abortion prohibition, but that the only vehicle available was the Medicaid bill.

Historian Maris Vinovskis emphasizes that this marked a turning point at which anti-abortion lawmakers learned to restrict access not by banning abortion, but by eliminating the means to obtain it. They used the appropriations process to accomplish “what could not be achieved through constitutional amendment.”[6] By embedding abortion restrictions into routine spending bills, lawmakers created a powerful way to undermine Roe without technically violating it. An immediate effect was the creation of a two-tiered system of reproductive rights. Wealthier women could continue to obtain abortions, while lower-income women, such as those on Medicaid, were forced to carry their pregnancies to term. The Supreme Court validated this in Maher v. Roe in 1977 and in Harris v. McRae in 1980, maintaining that while the Constitution guaranteed the right to abortion, it did not require the government to make that right financially accessible to all. As the Court stated, the government “need not remove [obstacles] not of its own creation.”[7] This logic fit neatly into the rise of the New Right. The fetus was being recast as a protected moral subject; as Sara Dubow describes it, it was transformed into “a symbol of national innocence and moral purity.”[8] At the same time, historian Linda Gordon notes that public funding has never been neutral, and has always reflected judgments about which women should bear children and which should not.[9] In this way, Hyde did not invent reproductive inequality, but it sharpened it considerably.

This raises the question of how the Hyde Amendment reshaped abortion access in the United States between 1976 and 1999, and why it disproportionately affected poor women and women of color. This paper argues that the Hyde Amendment transformed abortion from a constitutional right into an economic privilege. By restricting Medicaid funding, the amendment created a two-tiered system of reproductive access in which poor women and women of color were effectively denied the ability to exercise a legal right.

Historians who study reproduction agree that abortion in the United States has always been shaped by race, class, and power. Linda Gordon shows that reproductive control has never been distributed equally, as wealthier white women have long had greater access to contraception and abortion, while poor women and women of color faced barriers or state interference[10]. Johanna Schoen adds to this by examining how public health and welfare systems sometimes pushed sterilization or withheld care, showing that the state has often intervened most heavily in the reproductive lives of marginalized women[11]. Together, these historians argue that Hyde fits into a much older pattern of the government regulating the fertility of women who had the least political power.

Another group of historians focuses on law, policy, and the political meaning of abortion in the late twentieth century. Michele Goodwin analyzes how legal frameworks that claimed to protect fetal life often limited women’s autonomy, especially poor women[12]. Maris Vinovskis explains how anti-abortion lawmakers learned to use the appropriations process to restrict abortion access without challenging Roe directly[13]. Meanwhile, Sara Dubow traces how the fetus became a powerful cultural symbol, which helped conservatives rally support for funding restrictions like Hyde[14]. These scholars help explain how Hyde gained legitimacy both legally and culturally, and why it became such a durable policy.

A third set of historians looks at activism, feminism, and the reshaping of abortion politics in the 1970s and 1980s. Rosalind Petchesky shows how abortion became central to the rise of the New Right, as antifeminist and religious groups used the issue to organize a broader conservative movement[15]. Loretta Ross and other reproductive justice scholars explain how women of color challenged the narrow “choice” framework of the mainstream pro-choice movement, arguing that legality meant little without the resources needed to make real decisions[16]. Their work highlights that Hyde did not only restrict abortion for poor women, but also pushed activists to rethink what reproductive rights should even look like.

Taken together, these historians reveal three major themes: long-standing inequality in reproductive politics, legal tools that reinforced those inequalities, and the political shifts that made Hyde a defining part of conservative identity. What is still less explored, and where this paper enters the conversation, is how the Hyde Amendment created a two-tiered system of abortion access between 1976 and 1999, and how that funding gap turned a constitutional right into an economic privilege. This paper brings these threads together to show how policy, law, and inequality reshaped the meaning of abortion rights in the United States.

The Hyde Amendment did not appear out of nowhere; rather, it developed in a very particular political moment, in which abortion had become one of the most emotionally charged issues in American politics.[17] After Roe v. Wade legalized abortion nationwide in 1973, opponents of abortion had to reconsider their strategy.[18] They could no longer rely on state criminal bans, since these were now unconstitutional. Therefore, instead of attempting to outlaw abortion directly, they began to look for indirect ways to limit who could actually get one. The question became not whether abortion was legal, but whether it was accessible.[19] This shift happened at the same time that the country was experiencing a wave of distrust toward the federal government after Watergate, along with concerns about inflation and federal spending.[20] Additionally, as a movement over family values escalated, many Americans came to see the federal government as infringing on families’ privacy and rights. These anxieties made it easier to frame abortion as both a moral issue and a financial one. Historian Maris Vinovskis notes that the Hyde Amendment represented a new strategy, shifting away from trying to overturn Roe and toward “an effort to restrict the practical ability to obtain abortions through funding limitations.”[21] Anti-abortion lawmakers realized that they could still limit abortions by cutting off the financial aid that allowed poor women to obtain them.[22]

To understand this shift, it is important to recognize that the abortion debate had already intensified in the years leading up to Roe. During the late 1960s and early 1970s, Catholic organizations such as the National Right to Life Committee had begun mobilizing against the liberalization of abortion laws in states like New York and California[23]. At the same time, Medicaid, which was created in 1965 as part of the War on Poverty, became central to debates about welfare spending and the moral regulation of poor women[24]. Because Medicaid disproportionately served low-income women and women of color, it became an early battleground for questions about who deserved state-funded healthcare and reproductive autonomy.

Representative Henry Hyde was the first major figure behind this effort, and he did not try to hide his intentions. During debate in Congress, he stated, “I certainly would prevent, if I could legally, anybody having an abortion; unfortunately, the only vehicle available is the Medicaid bill.”[25] He made it clear that the amendment was not about government budgeting or fiscal responsibility, but about restricting abortion access by targeting low-income women who depended on Medicaid.[26] This strategy also aligned neatly with emerging political alliances, as fiscal conservatives who opposed federal spending could support Hyde because it reduced a publicly funded service.[27] At the same time, religious conservatives who morally opposed abortion also supported Hyde. The idea of “taxpayer conscience,” or the notion that people should not have to financially support something they disagree with, became an effective talking point.[28] However, this strategy also drew on a much longer history of the government controlling the reproductive lives of women, especially poor women and women of color. Nellie Gray, the March for Life national director, stated in a 1977 news journal that “pro-life organizations will only have one chance at a human rights amendment and they must do it right by seeing to it that abortion is not permitted in the United States.”[29] Gray’s warning reflected how strongly anti-abortion leaders viewed Hyde as a stepping stone toward a much larger project of restricting abortion nationwide. Her statement also highlighted the growing belief among conservative activists that federal funds could be used to reshape reproductive policy, a project that would disproportionately affect the same women who had long been targeted.

Throughout the twentieth century, the state encouraged childbirth among white, middle-class women while discouraging it among women considered “undesirable,” a category that often included poor women, Black women, and Native American women.[30] In this sense, the Hyde Amendment fit into an existing pattern of allowing privileged women to maintain reproductive autonomy while placing the greatest burden on those already facing economic and racial inequality. The structure of the amendment also built inequality directly into access. Since Hyde was attached to the federal appropriations bill for health and welfare, it had to be renewed every year.[31] This meant that each year, Congress debated what exceptions should be allowed, and whether Medicaid would cover abortion in cases of rape, incest, or a threat to the mother’s life.[32] These exceptions were often extremely limited, difficult to qualify for, or inconsistent across states.[33] In practice, they rarely resulted in meaningful access.

The impact of Hyde was immediate and severe. There was an enormous drop in Medicaid-funded abortions, and while states were technically allowed to use their own funds to pay for abortion services, most did not.[34] As a result, abortion access quickly became dependent not only on personal income but also on geography. A woman’s ability to exercise a supposedly constitutional right now depended on which state she lived in and whether she had the financial means to pay out of pocket.

By the late 1970s, the Hyde Amendment had created a two-tiered system of reproductive access. Abortion was still legal, but the ability to obtain one became tied to class and race.[35] For many women on Medicaid, especially Black and Latina women who were already disproportionately represented among low-income populations, the right to choose existed only in theory.[36] What Hyde actually accomplished was a shift from abortion as a universal, constitutional right to abortion as something a woman had to be able to afford. In this way, Hyde did not just restrict funding; it redefined what rights meant in the United States. It showed that a right could remain legally intact, yet still be functionally unreachable for some.

After the Hyde Amendment was passed in 1976, it quickly faced legal challenges from abortion rights advocates who argued that cutting off Medicaid funding violated the constitutional protections that Roe v. Wade had put in place.[37] Their basic argument was that if the government recognized the right to choose abortion, then it should not be allowed to create conditions that made that right impossible to exercise.[38] In other words, they argued that a right without access is not really a right at all. However, when these cases reached the Supreme Court, the Court ultimately sided with the federal government, confirming that the state could acknowledge a right while also refusing to make it materially available. The first major decision was Maher v. Roe in 1977. This case involved a Connecticut rule that denied Medicaid funding for abortions even as the state continued to cover costs associated with childbirth.[39] The plaintiffs argued that this policy violated the Equal Protection Clause by treating poor women differently from those who could pay privately.[40] However, the Supreme Court rejected this argument, stating in the majority opinion that “the Constitution does not confer an entitlement to such funds as may be necessary to realize the full advantage of the constitutional freedom.”[41] This reveals the Court’s broader stance, as the justices separated the idea of a right from the state’s obligation to make that right actually meaningful. By framing funding as an “entitlement,” the Court implied that financial accessibility was a luxury, not a constitutional requirement. This language helped transform abortion from a guaranteed right into a conditional one, dependent on a woman’s financial status.

This reasoning set the stage for a more consequential case, Harris v. McRae, which in 1980 dealt specifically with the constitutionality of the Hyde Amendment.[42] The plaintiffs again argued that denying Medicaid funding effectively denied the right to abortion to poor women. They also argued that Hyde violated the Establishment Clause because it reflected religious beliefs, particularly those of the Catholic Church.[43] However, the Court upheld the amendment, and Justice Potter Stewart wrote for the majority that although the government “may not place obstacles in the path of a woman seeking an abortion, it need not remove those not of its own creation.”[44] This distinction allowed the Court to reinterpret poverty not as a structural condition shaped by state policy but as an individual misfortune outside of constitutional concern. Faye Wattleton, president of the Planned Parenthood Federation, challenged the Court’s finding, stating that “the court has reaffirmed that all women have a constitutionally protected right to an abortion, but has denied poor women the means by which to exercise that right.”[45] Scholars like Michele Goodwin have also argued that this logic effectively weaponized economic inequality by making it a neutral, legally permissible barrier to reproductive autonomy[46]. The Court drew a clear line between legal rights and material access, claiming that the Constitution protects the first and not the second.

The distinction between rights and access became one of the most influential and damaging ideas in later abortion policy. The Court’s logic suggested that if poverty prevented a woman from obtaining an abortion, that was simply her personal situation and not something the government was responsible for addressing.[47] For poor women, though, this effectively meant that the right to abortion was conditional on wealth. Justice Thurgood Marshall pointed this out directly in his dissent, arguing that the decision reduced the right to choose to “a right in name only for women who cannot afford to exercise it.”[48] Marshall understood that legal recognition was meaningless when economic barriers stood in the way. Historians and legal scholars have also pointed out that these rulings reflected broader anxieties about welfare and poor women’s reproductive autonomy. Johanna Schoen notes that after Hyde, “the issue was no longer legality but economic access. The ability to choose became a measure of one’s class position.”[49] The Court’s decisions essentially cast poverty as a private problem, not a systemic barrier. By accepting the argument that the state did not have to fund abortions, the Court allowed economic inequality to become a legal tool for shaping reproductive outcomes.

The Harris decision also intensified racial disparities in reproductive healthcare; since women of color were disproportionately represented among Medicaid recipients, they experienced the most direct consequences of the amendment. Linda Gordon argues that policies like Hyde fit into a longer pattern in which the state has “regulated fertility more tightly among poor women and women of color.”[50] Hyde thus did not simply limit abortion funding; it also reinforced existing racial and economic hierarchies within reproductive control. The immediate impact of these decisions can clearly be seen in the data. In states that fully implemented the Hyde restrictions, Medicaid-funded abortions dropped by more than ninety-nine percent, essentially disappearing within the first year.[51] Clinics that had relied on Medicaid reimbursement closed, and in many communities, the nearest clinic became hours away.[52] For low-income women, the costs of travel, time off from work, and childcare created new layers of burden on top of the medical expense itself.[53]

Once the Supreme Court upheld the Hyde Amendment in Harris v. McRae, abortion access in the United States became uneven and heavily dependent on geography and income. Even though Roe v. Wade technically still guaranteed the constitutional right to abortion, the Hyde Amendment meant that states could decide whether they would use their own funds to support abortion services for Medicaid recipients. This resulted in what many scholars describe as a patchwork system of reproductive access, in which a woman’s ability to exercise her rights depended on her ZIP code and her bank account instead of a universal legal standard.[54] Since Black, Latina, and Native women were disproportionately represented among low-income Medicaid recipients, the restrictions clearly had a racial impact, even though the policy never mentioned race outright.

This pattern was not new; as Johanna Schoen writes, “the state has historically encouraged childbirth among white, middle class women while discouraging it among poor women and women of color.”[55] Hyde simply reshaped that older system into a modern one, using funding instead of forced sterilization or criminal statutes. Public funding decisions always reflect judgments about who should reproduce and who should not, or in other words, which lives are valued and which are not.[56] Meanwhile, the procedures themselves became more expensive and more difficult to access. Without Medicaid coverage, many women had to delay their abortions while they gathered money to pay for the procedure. This led to abortions being performed at later gestational stages, which made them more medically complicated and more costly. As Schoen explains, delays caused by funding restrictions increased both physical risk and emotional strain for patients.[57] Clinics in poorer regions, especially in the South and Midwest, struggled to stay open without Medicaid reimbursement, which left many areas without any providers at all.[58] The combination of traveling long distances and putting their lives on hold was far harder for lower-income women than for wealthier ones. The cost of abortion became a structural burden, one created by the conditions of poverty. For many women, these obstacles made abortion inaccessible, even if they technically had the legal right to obtain one.

By upholding Hyde, the Supreme Court effectively cemented this two-tiered system, confirming that constitutional rights did not guarantee the means to exercise them. Reproductive autonomy was made dependent on individual financial circumstances and state-level political culture. The legal battles that followed Hyde made it clear that the fight over abortion would be decided by who could afford it.

By the 1980s, the Hyde Amendment had become more than a funding restriction. It became a symbol. Beginning in 1976 as a policy decision buried in the federal budget, it grew into one of the defining features of the conservative movement. Hyde showed how questions about family, morality, and religion could be folded into debates about government spending, which linked fiscal and moral conservatism.[59]

Before the late 1970s, abortion had not been clearly split along party lines. There were liberal Republicans who supported Roe v. Wade, and conservative Democrats who opposed abortion. But this political landscape changed dramatically as the New Right emerged. Evangelical leaders like Jerry Falwell and Paul Weyrich mobilized conservative Christians around issues such as school desegregation, the Equal Rights Amendment, and sex education[60]. Abortion became the unifying issue they needed, a morally charged topic that could bind fiscal conservatives, religious traditionalists, and states’ rights advocates. The political backlash against Roe occurred at the same time that evangelical Christians were becoming more politically organized.[61] Hyde provided a concrete policy issue around which these groups could mobilize, and it helped them forge a new partisan identity. Debates that began over funding became part of a larger cultural conflict about the meaning of family, sexuality, and, arguably, national values. The rhetoric surrounding the Hyde Amendment reflected this shift: instead of discussing abortion primarily in terms of women’s autonomy or health, conservatives increasingly framed the debate around the fetus. Sara Dubow argues that by the 1980s, the fetus had come to symbolize “a national innocence and moral purity,” a life seen as separate from the woman and deserving of state protection.[62] This transformation was crucial because it allowed abortion opponents to present themselves as protecting vulnerable life instead of restricting women’s rights.

President Ronald Reagan played a major role in pushing this narrative. Although he had signed an abortion reform law as governor of California, by his 1980 presidential campaign he had fully embraced the anti-abortion cause. In his 1983 essay “Abortion and the Conscience of the Nation,” he argued, “We cannot diminish the value of one category of human life, the unborn, without diminishing the value of all human life.”[63] With this statement, Reagan tied abortion to a broader moral crisis, suggesting that the nation’s very character and spirituality were at stake. This argument resonated strongly with the evangelicals who had helped usher him into office, as he frequently spoke about the United States as a nation in need of moral renewal. His rhetoric helped solidify abortion as a moral anchor of conservative identity and made support for Hyde a litmus test for Republican lawmakers.[64] In this environment, opposing the Hyde Amendment became politically risky, as it could be interpreted as rejecting the moral vision that Reagan had tied so closely to national identity.

Meanwhile, the Hyde Amendment’s budget framing allowed conservatives to present the issue in the language of limited government rather than explicitly presenting it as moral regulation. The idea that taxpayers should not be forced to support abortion with public funds gained traction among people who might not have outright embraced the anti-abortion movement. As Maris Vinovskis explains, Hyde represented a new style of policy making in which moral goals were pursued through fiscal restrictions rather than constitutional bans.[65] It was a quieter and more durable form of regulation.

Blending moral politics and fiscal conservatism also helped solidify the broader culture wars of the 1980s and 1990s. Issues like school prayer, sex education, gay rights, and welfare reform became linked together as defenses of “traditional values.”[66] The Hyde Amendment fit neatly into this framework, allowing conservatives to argue that they were simultaneously protecting unborn life and protecting taxpayers from government overreach.[67] They saw abortion as both a moral failure and a misuse of public funds. This shift, however, made it increasingly difficult for Democrats to maintain a unified position on abortion. While most Democratic lawmakers supported the legal right to abortion, many were hesitant to oppose the Hyde Amendment outright, avoiding the risk of being labeled anti-religion.[68] As a result, the amendment was repeatedly renewed with bipartisan support. A 1993 newspaper article reflecting on the twenty years since Roe noted that Hyde displayed a “masterful understanding of the rules, procedures, and time constraints of the House,” as he “rounded up 254 of his colleagues (including 98 Democrats) to sustain [his amendment] and prohibit federal funding to pay for abortions for poor women.”[69] The article made clear that the amendment’s durability rested not only on conservatives but on a bipartisan reluctance to challenge a measure framed as fiscally responsible and morally protective.

By the 1990s, the logic behind Hyde had become ingrained in national political identity. The idea that abortion was something the government should not fund became widely accepted. This masked the fact that Hyde had made abortion a class-dependent right, one available to those who could afford it and inaccessible to those who could not.[70] It played a key role in shaping the culture wars by turning women’s reproductive choices into questions of morality and national identity rather than questions of justice and autonomy.

The widening inequalities created by the Hyde Amendment did more than restrict access; they exposed the limits of the existing pro-choice framework and set the stage for a new kind of activism. The measures taken by states may have seemed procedural, but combined with the lack of funding they created a maze of barriers for low-income women. Before the inequalities created by Hyde pushed activism in new directions, the reproductive rights movement of the 1970s was dominated by second-wave feminist organizations such as NOW and NARAL.[71] These groups framed abortion primarily through the language of privacy and individual choice, relying heavily on Roe’s constitutional logic.[72] Yet this framework was limited. It often centered on middle-class white women and assumed that once legal barriers were removed, access would naturally follow.[73] Poor women, women of color, and immigrant women repeatedly testified that legality meant little without affordable care, transportation, or childcare.[74] Their experiences highlighted structural inequalities that mainstream pro-choice rhetoric did not address. By the late 1980s and 1990s, many reproductive rights organizations began referring to the United States as having two systems of abortion access. In wealthier states, where Medicaid or state funds covered abortion, access remained relatively stable; in other states, abortion access had become severely limited. The concept of “choice,” which had been the foundation of pro-choice activism, no longer fit the reality. Abortion had shifted from a universal constitutional right to a right that had to be purchased. The Hyde Amendment redrew the map of reproductive freedom, determining where and to whom abortion was available.

While the Hyde Amendment strengthened the conservative movement and reshaped how abortion was discussed in national politics, it also pushed reproductive rights activism in a new direction. In the 1970s, many mainstream feminist organizations had framed abortion mainly as a matter of individual choice, drawing directly from the privacy language of Roe v. Wade.[75] The assumption was that if abortion was legal, women would be able to access it. But Hyde made it clear that legality and access were not the same thing, and that the concept of “choice” was far less meaningful for women who could not afford the procedure in the first place. At first, mainstream pro-choice organizations struggled to respond. Groups like the National Association for the Repeal of Abortion Laws (now known as Reproductive Freedom for All) and NOW (the National Organization for Women) continued to fight Hyde through legislative appeals and court challenges, focusing on restoring Medicaid coverage.[76] These strategies, however, were slow and had little success. Contemporary reports show how quickly grassroots feminist activism responded to Hyde. A 1979 Delaware County Daily Times article described more than forty NOW members and NARAL activists picketing a congressional dinner attended by Henry Hyde.[77] Protesters carried signs reading “Poor people don’t have a choice about my body,” and NOW’s Delaware County president Debbie Rubin told reporters that the Hyde Amendment “eliminates all abortions for poor women except when the life of the mother is in danger.”[78] She warned that measures like Hyde did not stop abortion but instead “force a return to back-alley and self-inflicted abortions.”[79] Meanwhile, women directly affected by Hyde were left to find practical ways to access the care they needed.
This led to the early development of abortion funds, community-based efforts in which volunteers raised money to help low-income women pay for their abortions.[80] These funds showed that access could be supported by mutual aid and grassroots networks.

The deeper and more transformative opposition to Hyde came from activists who were already organizing around healthcare inequality, racism, and economic justice. Their focus was on the fact that the same systems that restricted abortion access also failed to provide basic healthcare, childcare, housing, and social support.[81] For many women of color, the issue was not only the right to end a pregnancy but also the right to raise children safely and with dignity.[82] This perspective was rooted in a longer history, as poor women and women of color had often faced contradictory and coercive forms of reproductive control, including being denied contraception and abortion.[83] The Hyde Amendment did not create this dynamic, but it extended it into the post-Roe era by making abortion services unattainable for those without financial resources. Linda Gordon notes that decisions about public funding have long reflected judgments about which women should bear children and which should not, and Hyde reinforced exactly this kind of hierarchy.[84]

By the early 1990s, these critiques began to merge into a new framework known as reproductive justice. The term was coined by a group of Black women activists in 1994 who argued that the mainstream pro-choice movement focused too narrowly on the legal right to abortion, ignoring the economic and social barriers that shaped many women’s decisions.[85] They insisted that reproductive freedom was not only about ending a pregnancy but also about having the conditions necessary to make and sustain meaningful choices in the first place.[86] Reproductive autonomy clearly required more than legal permission to have an abortion; access to healthcare, living wages, and safe housing are only a few of the resources involved.[87] Organizations like SisterSong, founded in 1997, helped establish reproductive justice as a national movement.[88] It brought together Black, Latina, Indigenous, and Asian American women to argue that reproductive rights should be understood as human rights, grounded in equality rather than privacy alone.[89] Their work highlighted that access to abortion, childcare, healthcare, and racial and economic justice were all deeply connected. The activism that emerged in response to the Hyde Amendment did not simply resist the policy; it reframed the entire conversation about reproductive rights and freedoms. “Choice” was an incomplete framework, usually centered on the experiences of white middle-class women and overlooking the realities of those with fewer resources.[90]

Nearly fifty years after its passage, the Hyde Amendment continues to shape reproductive access in the United States. It did not overturn Roe v. Wade, and it did not need to. By restricting Medicaid funding, Hyde redefined abortion as something that had to be purchased personally, even though it had been framed as a constitutional right. It set a precedent for how lawmakers could limit rights indirectly, through economic policy rather than outright prohibition. The Supreme Court’s decisions in Maher v. Roe and Harris v. McRae reinforced the shift by drawing a line between the right to choose and women’s ability to exercise that right. The Court insisted that poverty was a private circumstance, not something the state was obligated to remedy. This stance made economic inequality seem legally neutral, even as its burdens fell hardest on poor women and women of color.

The result was a stratified system in which abortion remained legal but unevenly available. Access varied dramatically by state, income level, and race, and the disparities only grew over time as clinics closed and new restrictions were passed. Lawmakers began to justify restrictions as a defense of life rather than a limitation on women. At the same time, the activism that emerged from groups like the National Black Women’s Health Project and SisterSong reframed abortion access as part of a broader struggle for reproductive justice, insisting that reproductive freedom means not only the right to end a pregnancy but also the right to raise children in safe and secure environments. This exposed what Hyde had been showing all along: rights are only meaningful when people have the resources to act on them.

On the one hand, the Hyde Amendment demonstrated how effectively lawmakers can use economic constraints to reshape constitutional rights without touching their legality. That strategy persisted for decades, influencing battles over contraception access, parental consent laws, and clinic closures. On the other hand, Hyde also helped produce a more expansive movement for reproductive freedom, one that recognized the limits of legal victories without material support. The lesson of Hyde is that a right that cannot be accessed is not truly a right. The law might claim neutrality in withholding federal funds, but the consequences of that “neutrality” are deeply unequal. The Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization in 2022 completed what Hyde set in motion. By allowing states to ban abortion outright, Dobbs transformed the unequal access created by Hyde into legal prohibition. The patterns of racial, geographic, and economic inequality exposed by Hyde now define the post-Dobbs landscape, showing that the struggle for reproductive freedom has always been connected to the struggle for equality.

Understanding the Hyde Amendment can also help social studies teachers think about how to teach topics like constitutional rights, inequality, and the ways legal decisions affect people’s everyday lives. For high school students, it can be difficult to understand how a right can exist on paper yet remain unreachable in practice; the Hyde Amendment offers a clear example. Looking at cases like Maher v. Roe and Harris v. McRae helps students see how the Supreme Court can acknowledge a constitutional right while also allowing policies that make that right inaccessible to certain groups. This gives teachers a concrete way to help students think about the difference between what the law says and how people actually experience it, which is an important part of civic learning.

This topic is also useful for teaching about political realignment and the culture wars of the late twentieth century. Abortion was not always a purely partisan issue, and Hyde helps show students how moral, religious, and economic arguments came together to reshape politics on a national level. When teachers use primary sources like congressional testimonies, protest coverage, and presidential speeches, students can trace how different groups framed abortion and funding restrictions, and how these debates shaped the identity of the New Right. This not only builds students’ analytical skills but also shows them how public policy becomes a cultural symbol, not just a legal decision. Hyde also creates an opportunity to introduce the concept of reproductive justice, especially when teaching about movements led by women of color. Many high school students have never considered how race, class, and geography influence who can actually exercise their rights. Discussing how organizations like the National Black Women’s Health Project and later SisterSong responded to Hyde helps students see how activism grows in response to inequality. Teachers never need to take a political stance to guide students through these conversations. Instead, they can highlight how different communities understood the consequences of Hyde and why some activists argued that “choice” alone was not enough.

All in all, the Hyde Amendment is a strong example for teaching disciplinary literacy in social studies. It encourages students to read court cases closely, compare historical interpretations, analyze political speeches, and connect policy decisions to real human outcomes. Using Hyde in the classroom shows students that history is not just about memorizing events, but can also be about understanding how power operates and how policies can reshape people’s lives.

Cofiell, Trisha. “Women Protest at Hyde Dinner.” Delaware County Daily Times (Chester, PA), September 14, 1979. NewspaperArchive. https://newspaperarchive.com/delaware-county-daily-times-sep-14-1979-p-1/

Daley, Steve. “Hyde Remains Constant.” Franklin News-Herald (Franklin, PA), July 14, 1993. NewspaperArchive. https://newspaperarchive.com/franklin-news-herald-jul-14-1993-p-4/.

Dubow, Sara. Ourselves Unborn : A History of the Fetus in Modern America. Oxford: Oxford University Press, 2011. https://research.ebsco.com/linkprocessor/plink?id=a4babef6-641b-3719-a368-8aa5e93e8575.

Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

Gordon, Linda. The Moral Property of Women : A History of Birth Control Politics in America. 3rd ed. Urbana: University of Illinois Press, 2002. https://research.ebsco.com/linkprocessor/plink?id=ea0e3984-56df-3fca-adb6-3fc070515698.

Gunty, Susan. “THE HYDE AMENDMENT AND MEDICAID ABORTIONS.” The Forum (Section of Insurance, Negligence and Compensation Law, American Bar Association) 16, no. 4 (1981): 825–40. http://www.jstor.org/stable/25762558.

Harris v. McRae, 448 U.S. 297 (1980). https://supreme.justia.com/cases/federal/us/448/297/.

Maher v. Roe, 432 U.S. 464 (1977). https://supreme.justia.com/cases/federal/us/432/464/.

Neurauter, Juliann R. “Pro-lifers Favor Hyde Amendment.” News Journal (Chicago, IL), December 7, 1977. NewspaperArchive. https://newspaperarchive.com/news-journal-dec-07-1977-p-19/

Olson, Courtney. “Finding a Right to Abortion Coverage: The PPACA, Intersectionality, and Positive Rights.” Seattle University Law Review 41 (2018): 655–690.

Perry, Rachel. “Abortion Ruling to Hit Hard Locally.” Eureka Times-Standard (Eureka, CA), August 27, 1980. NewspaperArchive. https://newspaperarchive.com/eureka-times-standard-aug-27-1980-p-9/

Petchesky, Rosalind Pollack. “Antiabortion, Antifeminism, and the Rise of the New Right.” Feminist Studies 7, no. 2 (1981): 206–46. https://doi.org/10.2307/3177522.

Reagan, Leslie J. “‘About to Meet Her Maker’: Women, Doctors, Dying Declarations, and the State’s Investigation of Abortion, Chicago, 1867-1940.” The Journal of American History 77, no. 4 (1991): 1240–64. https://doi.org/10.2307/2078261.

Reagan, Ronald. “Abortion and the Conscience of the Nation.” The Catholic Lawyer 30, no. 2 (1986). https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=2212&context=tcl.

Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

Schoen, Johanna. Choice and Coercion : Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

Solinger, Rickie. “‘Wake up Little Susie’: Single Pregnancy and Race in the ‘Pre-Roe v. Wade’ Era.” NWSA Journal 2, no. 4 (1990): 682–83. http://www.jstor.org/stable/4316090.

United States. Congress. House. Committee on Appropriations. Federal Funding of Abortions, 1977–1979. Washington, D.C.: U.S. Government Printing Office, 1979. Gerald R. Ford Presidential Library. https://www.fordlibrarymuseum.gov/library/document/0048/004800738repro.pdf

Vinovskis, Maris A. “The Politics of Abortion in the House of Representatives in 1976.” Michigan Law Review 77, no. 7 (1979): 1790–1827. https://doi.org/10.2307/1288043.


[1] Reagan, Leslie J. “‘About to Meet Her Maker’: Women, Doctors, Dying Declarations, and the State’s Investigation of Abortion, Chicago, 1867-1940.” The Journal of American History 77, no. 4 (1991): 1240–64. https://doi.org/10.2307/2078261.

[2] Reagan 1245

[3] Solinger, Rickie. “‘Wake up Little Susie’: Single Pregnancy and Race in the ‘Pre-Roe v. Wade’ Era.” NWSA Journal 2, no. 4 (1990): 682–83. http://www.jstor.org/stable/4316090.

[4] Gunty, Susan. “THE HYDE AMENDMENT AND MEDICAID ABORTIONS.” The Forum (Section of Insurance, Negligence and Compensation Law, American Bar Association) 16, no. 4 (1981): 825. http://www.jstor.org/stable/25762558.

[5] Vinovskis, Maris A. “The Politics of Abortion in the House of Representatives in 1976.” Michigan Law Review 77, no. 7 (1979). https://doi.org/10.2307/1288043.

[6] Vinovskis 1801

[7] Harris v. McRae, 448 U.S. 297 (1980) https://supreme.justia.com/cases/federal/us/448/297/

[8] Dubow, Sara. Ourselves Unborn : A History of the Fetus in Modern America. Oxford: Oxford University Press, 2011. https://research.ebsco.com/linkprocessor/plink?id=a4babef6-641b-3719-a368-8aa5e93e8575.

[9] Gordon, Linda. The Moral Property of Women : A History of Birth Control Politics in America. 3rd ed. Urbana: University of Illinois Press, 2002. https://research.ebsco.com/linkprocessor/plink?id=ea0e3984-56df-3fca-adb6-3fc070515698.

[10] Gordon 29-34

[11]  Schoen, Johanna. Choice and Coercion : Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

[12] Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

[13] Vinovskis 1793-1796

[14] Dubow 147-155

[15] Petchesky, Rosalind Pollack. “Antiabortion, Antifeminism, and the Rise of the New Right.” Feminist Studies 7, no. 2 (1981): 206–46. https://doi.org/10.2307/3177522.

[16] Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

[17] Vinovskis 1818

[18] Vinovskis 1794

[19] Gunty 837

[20] Vinovskis 1812

[21] Vinovskis 1801

[22] Gunty 835

[23] Petchesky 120

[24] Schoen, Johanna. Choice and Coercion : Birth Control, Sterilization, and Abortion in Public Health and Welfare. Chapel Hill: The University of North Carolina Press, 2005. https://research.ebsco.com/linkprocessor/plink?id=f8bc89c3-f4c2-36da-ba5d-809e9b26a981.

[25] Olson, Courtney. “Finding a Right to Abortion Coverage: The PPACA, Intersectionality, and Positive Rights.” Seattle University Law Review 41 (2018): 655.

[26] Gunty 831

[27] Vinovskis 1811

[28] United States. Congress. House. Committee on Appropriations. Federal Funding of Abortions, 1977–1979. Washington, D.C.: U.S. Government Printing Office, 1979. Gerald R. Ford Presidential Library. https://www.fordlibrarymuseum.gov/library/document/0048/004800738repro.pdf

[29] Neurauter, Juliann R. “Pro-lifers Favor Hyde Amendment.” News Journal (Chicago, IL), December 7, 1977. NewspaperArchive.

[30] Schoen 3-11

[31] Vinovskis 1793

[32] Gunty 826

[33] Gunty 825

[34] Gunty 825

[35] Schoen 5

[36] Schoen 5

[37] Gunty 834

[38] Gunty 836

[39] Maher v. Roe, 432 U.S. 464 (1977). https://supreme.justia.com/cases/federal/us/432/464/

[40] Maher v. Roe

[41] Maher v. Roe

[42] Harris v. McRae

[43] Harris v. McRae

[44] Harris v. McRae

[45] Perry, Rachel. “Abortion Ruling to Hit Hard Locally.” Eureka Times-Standard (Eureka, CA), August 27, 1980. NewspaperArchive.

[46] Goodwin, Michele. “Fetal Protection Laws: Moral Panic and the New Constitutional Battlefront.” California Law Review 102, no. 4 (2014): 781–875. http://www.jstor.org/stable/23784354.

[47] Gunty 834

[48] Harris v. McRae

[49] Schoen 147

[50] Gordon 340

[51] Gunty 828

[52] Schoen 225

[53] Schoen 140

[54] Schoen 24

[55] Schoen 5

[56] Schoen 5

[57] Schoen 149

[58] Schoen 32

[59] Vinovskis 1818

[60] Petchesky 216-17

[61] Dubow 162

[62] Dubow 7

[63] Reagan, Ronald. “Abortion and the Conscience of the Nation.” The Catholic Lawyer 30, no. 2 (1986). https://scholarship.law.stjohns.edu/cgi/viewcontent.cgi?article=2212&context=tcl.

[64] Dubow 154

[65] Vinovskis 1801

[66] Dubow 165

[67] Vinovskis 1795

[68] Vinovskis 1809

[69] Daley, Steve. “Hyde Remains Constant.” Franklin News-Herald (Franklin, PA), July 14, 1993. NewspaperArchive. https://newspaperarchive.com/franklin-news-herald-jul-14-1993-p-4/.

[70] Schoen 5

[71] Gordon 311-323

[72] Gordon 316

[73] Schoen 52

[74] Gunty 827-829

[75] Schoen 74

[76] Dubow 159

[77] Trisha Cofiell, “Women Protest at Hyde Dinner,” Delaware County Daily Times (Chester, PA), September 14, 1979, 1, Newspapers.com.

[78] Cofiell 1

[79] Cofiell 1

[80] Schoen 11

[81] Goodwin 818

[82] Ross, Loretta. “Understanding Reproductive Justice: Transforming the Pro-Choice Movement.” Off Our Backs 36, no. 4 (2006): 14–19. http://www.jstor.org/stable/20838711.

[83] Schoen 6

[84] Gordon 339

[85] Ross 14-15

[86] Goodwin 785

[87] Ross 14-16

[88] Ross 17

[89] Goodwin 857

[90] Gordon 339

Beyond the Box Score

October 15, 1923. John McGraw’s New York Giants faced Miller Huggins’s New York Yankees in game six of the World Series. The Yankees had opened their season that April with the debut of The House That Ruth Built, where Babe Ruth set the tone for the year with three home runs along with eight walks. That tone held until the day at the Polo Grounds in Upper Manhattan when McGraw’s dream of three straight championships was crushed, allowing the New York Yankees to win their very first World Series championship.

The Yankees’ World Series victory was the very first story on the front page of this New York Times issue, which describes game six as intense, with many back-and-forth moments between the Giants and the Yankees. Each team had at least one key player with a large impact on the game: for the Yankees, Babe Ruth, of course, and for the Giants, their pitcher Art Nehf. Nehf, whom the author of the article calls “the last hope of the old guard,”[1] had allowed only two hits in the first seven frames and a single home run to Ruth. With his great speed and side-breaking curve, Nehf had been too powerful for the Yankee hitters, who went hitless from the third inning to the eighth. Three runs behind and getting no love from the crowd in the Giants’ home stadium, Huggins and his team faced a grim situation.

When the eighth inning arrived, things still seemed to be going well for Nehf, but on the second pitch of the inning the tide began to turn. The ball flew close to Walter Schang’s ear; he tried to move and ended up hitting the ball toward third base for a single. After this hit, two more Yankee players reached base and brought Schang home, putting the Yankees only two runs behind the Giants. According to the article, “Nehf’s face turned as white as a sheet,”2 and after the few hits he gave away he could not continue. Bill Ryan, the backup pitcher, came in to try to salvage what he could from the wreckage Nehf left behind. Ryan started well and almost made it out of the inning until Bob Meusel hit the ball slightly to his right into center field. Three runs scored on that hit, five for the inning, all but putting the World Series out of reach.

With that eighth-inning rally, the Yankees put the game away and won their very first World Series championship.

The next big story on the front page of this New York Times issue is that hungry mobs raided Berlin bakeries. By this point in 1923, five years after losing World War One, Germany was facing terrible consequences imposed by the Allied powers. One of these stemmed from reparations: Germany was not doing a good job of paying its war debts to the French, who therefore decided to occupy the Ruhr district. This area was known for its raw materials, which the French would take for themselves as payment on German war debts. In these New York Times articles, it is fascinating to see the differences between rioting in German-controlled cities like Berlin and Frankfurt and in French-occupied ones such as Neustadt and Düsseldorf. The first half of the article covers Berlin and Frankfurt, two cities still controlled by the German government but wracked by inflation in bread prices. The government had decided to print more money to cover its war debts, and the problem with printing more money is that it creates more physical currency while decreasing its value. After the government did this, the German mark became almost worthless and the price of bread skyrocketed. The article reports that “5000 demonstrators, mostly unemployed men, reinforced by women with market baskets… marching to the Rathaus and making demands upon the authorities… The police reserves were called and drove demonstrators away.”[2] Inflation wrecked the economy so badly that German people were unable to provide for their families, and they protested in the capital to show the disarray of the German state.

The second half of the article covers Neustadt and Düsseldorf, two cities in territory occupied by France under the terms of the Treaty of Versailles. In Neustadt, crowds of unemployed people attempted to raid a post office reported to be holding currency, and French authorities were sent out to break up the crowd. In Düsseldorf, communists and nationalists were working together to foment trouble in the Ruhr district. The author states one key difference between the riots in this city and those in Berlin: “According to a statement made this morning the movement is political rather than economic. It was aimed against Chancellor Stresemann (German foreign minister) on the one hand and against the French on the other.”4 These people were not rioting because they lacked food; they hated the fact that they were being ruled by a foreign power. I found this section of the newspaper very interesting because, knowing that Hitler later came to power, it shows how deeply the German people despised the Treaty of Versailles and how willing they were to shift to political extremes to get rid of it.

Other sections of the issue comment on the worsening condition of the German state during the 1920s. One article is told from the perspective of Reed Smoot, chairman of the Senate Finance Committee, who was called to the White House to tell the president the conclusions he had reached after his recent trip to Europe.

            After his trip, the senator had some definite opinions on reviving the Hughes proposal to determine Germany’s ability to pay its reparations from the war. Under this plan, an international commission would fix the amount of money that the Germans would have to pay back. Smoot wanted all countries in the commission to agree to the plan and expected the French to back down on their reparation demands. To be fair, most of World War I had been fought on French territory, and the northern regions of the country needed those reparations for rebuilding.

            The senator knew that France would most likely not agree to this arrangement, and he was frightened for the future of Europe. He told the president, “Unless something was done quickly, there was danger of an outbreak which might involve all of Europe.”[3] Unfortunately, Smoot was right, and nothing was done about the issue. It is precisely because the Allies did not relax reparations and kept making demands of a destroyed Germany that Hitler was able to become chancellor a decade later.

The next big headline of this New York Times issue turns to news in the United States: a conference of drys calling on President Calvin Coolidge to take action against people breaking the rules of Prohibition. The council of drys, people opposed to liquor consumption, saw how many people were smuggling illegal booze by sea and wanted it stopped; they wanted the president and the American people to uphold the Eighteenth Amendment.

Smuggling liquor by sea was one of many ways citizens got around Prohibition in the 1920s. Rum Row was the name of a naval liquor market along the East Coast, just beyond the American maritime limit, where transactions of alcohol were made. Bootleggers, people who engaged in the illegal sale of alcohol, would simply sail out to this region in a small boat to pick up shipments of liquor to resell back in the States. The last small section of this article consists of direct quotes from the president calling on legislators to abide by the laws and punish people who break the laws of the Constitution.

He says that an official who violates “the State or Federal Constitution should resign his office and give place to one who will neither violate his oath nor betray the confidence of the people.”[4] Some corrupt politicians were becoming bootleggers themselves or were refusing to punish lawbreakers, which is why the president had to make this statement to legislators. Coolidge ends his statement by saying, “Lawmakers should not be lawbreakers.”7

Another section farther into the New York Times article is written from the perspective of another representative reporting on a country he had traveled to. In this case, it is Fred A. Britten of Illinois, returning from a visit to Russia having changed his mind on recognition of the Soviet government. Much like Reed Smoot, Britten called upon the president to share his reports and experiences after spending time in the new Soviet Union.

            Unsurprisingly, the representative began his report to the president by saying, “The Soviet regime was a visionary Government whose very foundation is based on murder, anarchy, Bolshevism and theft.”[5] Given that this article was written only three years after the first Red Scare in the United States, one can imagine his thoughts on the regime in Russia. In the early 1920s, many U.S. states outlawed advocating violence as a means of securing social change, and most people suspected of being communist or left-wing were jailed. It is also worth mentioning that this first Red Scare did not distinguish between communism, socialism, social democracy, or anarchism; all were deemed threats against the nation.

            Britten mentions that he “traveled unofficially, sought no favors, and tried to see the good side of that tremendous political theory which is now holding 150,000,000 people in subjection.”[6] Whether he truly tried to see the good side of Russia is debatable. He also discusses the major difference in how religion was treated in Russia. Atheism was what was primarily taught in the Soviet Union, because religion was seen as a bourgeois institution whose only goal was to make money off its followers. Britten mentions some signs that he saw, one by the entrance to the Kremlin Palace that read, “Religion is the opium of the State,”[7] and another that said, “Religion is the tool of the rich to oppress the poor.”[7] Communism is very different from capitalism, which is why two separate Red Scares occurred in the United States to protect the country from an ideology so unlike its own.

            The prompt for this paper was to find a significant baseball box score from the 1900s of our choosing. I selected the Yankees’ first-ever World Series win against the New York Giants, using the Historic New York Times Database. We were then instructed to examine the other articles published in that same newspaper issue. For example, I focused on reports of hungry mobs raiding bakeries in Berlin, driven by the collapse of the German mark and soaring bread prices after World War I. This was the first major assignment of the class, designed to help us begin developing primary source research and analysis skills, an essential foundation for any history course.

Teachers don’t have to limit this to a baseball history lesson; it can easily be adapted to focus on any major topic in U.S. history from the 1900s and beyond. Students can begin with a key event as the entry point for their primary source research. Then, they can expand their analysis by identifying and writing about other events covered in the same newspaper issue, painting a fuller picture of what was happening in the U.S. during the chosen time period. This strategy not only sharpens students’ analytical skills but also broadens their understanding of how historical events overlap and influence one another, helping them grasp the interconnectedness of social, political, and cultural developments within a given era.

“Britten Opposes Soviet Recognition.” New York Times (1923-), Oct 16, 1923: Page 5. https://login.tcnj.idm.oclc.org/login?url=https://www.proquest.com/historical-newspapers/yanks-win-title-6-4-victory-ends-1-063-815-series/docview/103153313/se-2.

“Conference of Drys Calls on Coolidge For Drastic Action.” New York Times (1923-), Oct 16, 1923: Page 1.

“Hungry Mobs Raid Berlin Bakeries.” New York Times (1923-), Oct 16, 1923: Page 1.

Oversimplified. “Prohibition – OverSimplified.” YouTube video, December 15, 2020.

“Smoot and Burton See Peril In Europe.” New York Times (1923-), Oct 16, 1923: Page 3.

“Yanks Win Title; 6-4 Victory Ends $1,063,815 Series.” New York Times (1923-), Oct 16, 1923:  Page 1.


[1] “Yanks Win Title; 6-4 Victory Ends $1,063,815 Series,” New York Times (1923): 1.

[2] “Hungry Mobs Raid Berlin Bakeries,” New York Times (1923): 1.

[3] “Smoot and Burton See Peril In Europe,” New York Times (1923): 3.

[4] “Conference of Drys Calls on Coolidge For Drastic Action,” New York Times (1923): 1.

[5] “Britten Opposes Soviet Recognition,” New York Times (1923): 5.

[6] “Britten Opposes Soviet,” 5.

Third Gender Identities in South Asia and their Cultural Significance in Modern History

 “Hijras are often seen doing mangti (begging) at busy intersections. The chelas knock on car windows to ask for money in exchange for their blessing… They fear that if they don’t give money, we might curse them with bad luck. We beg to feed ourselves. Even if they don’t want to, they’ll still give money. They’re scared they’ll be reborn as hijras in their next life, or they’ll lose a loved one, or have bad business… During mangti, I’ve been beaten many times. Some people have ripped my clothes. Nobody sees what’s good about us. People see us from a negative perspective. Some people even slap us, or will just tell us to go away. The police never help us. They discriminate against us or they pressure us into having sex with them.”[1]

The above passage is testimony provided by a hijra to a Western news outlet, describing her experiences during mangti, a transaction in which civilians pay for services such as blessings. Despite this being a fair transaction, average civilians hold a hostile view of Hijra who ask for payment, yet many still pay for the service. This is one of the few ways that Hijra make money in the present day; it echoes their traditional practice of supporting and blessing their communities at childbirths, weddings, and other communal events. The public’s negative perception of Hijra can be seen in the police brutality and violence that many of these individuals endure in the present. However, the story of the Hijra did not start in the 21st century; third gender individuals have had a presence in South Asia for centuries.

The Hijra are a modern group of individuals living in South Asia who identify as a third gender that gained legal recognition in the 21st century; these individuals are born male but do not identify with their sex. Many who identify with the term also consider themselves “demigods,” beyond the identity of an ordinary individual. Their history and community tell a narrative of resilience against social and cultural oppression while striving to be understood as human beings, just like many other transgender or Indigenous third identities that exist throughout the world. For the Hijra, the survival of their culture often depends on small communities, or gharana, organized around a Guru, the teacher, and a Nayak, the gharana’s figurehead, living together as a family. However, the gharana have not been empowered enough to combat the social systems that keep the Hijra oppressed and marginalized. “Common” citizens’ fear of the Hijra creates a continued cycle of refusal to accept them. In many spaces where Hijra make their livelihood, there is public resistance to their presence, and physical harm is often inflicted on these individuals. This fear has roots in colonial policy that still impacts the daily lives of Hijra because it dismissed the relationship of gender within pre-colonial culture and Hinduism.

The Hijra are also significantly connected to the historical term Khwajasarai, third gender individuals who held a role in the Mughal Court, the center of governmental power, during the rule of the Mughal Empire in South Asia. While the Khwajasarai were individuals with social status and political power, it was ultimately the British East India Company’s policies enforcing Western standards of gender that removed them from their societal role. Much like the Guru or Nayak of a gharana, the Khwajasarai served as holders of “knowledge traditions of teacher-disciple lineages, practices of kinship-making and elite codes of masculinity”.[2] In early attempts to police the Khwajasarai, British officials used religious laws to insert themselves into the territory’s power structures, but in doing so, they indirectly invalidated the conceptions of sexual practices and kinship in which the Khwajasarai held power.[3] Ultimately, the British asserted the control they wanted over the region using British standards of gender and instituted policing and regulation of homosexual activities, the Khwajasarai, and other non-Western standards of presentation and identity. The history of discrimination towards individuals who stretch beyond the binary lens of gender and sexuality in South Asia is drastic, and it is still ongoing in ways that legal policies of reparation cannot disrupt. This paper will argue that the history of discrimination towards the Indigenous gender identities of South Asia runs so deep that, despite efforts to support the agency of Hijra and other individuals, a broader cultural shift in attitudes towards the Hijra is needed. With this understanding of gender dynamics, many Hijra have treated this as a call to action and have at times alienated other Hijra who live in rural areas, are non-Hindu, or belong to a lower social class or caste.

Throughout the West and many other parts of the world, there is a common misconception that third gender identities or transgender people are a recent phenomenon, but this could not be further from the truth. In this paper, the terms Hijra, Indigenous gender identities, and third-gender identity will be used instead of Eurocentric terminology such as trans, transgender, or eunuch, unless a specific individual has associated themselves with such a term. Vaibhav Saria, author of Hijras, Lovers, Brothers: Surviving Sex and Poverty in Rural India and professor of anthropology, explains that “Hijras, with their long-documented history, are not a local or cultural instantiation of the global category of trans… Hijras were referred to as ‘eunuchs’ in much of colonial discourse and in English language dailies until quite recently”.[4] As time progresses, the West is learning to adopt the terminology that best describes and conveys the power of Hijra identity, but words such as eunuch and transgender continue to fail to embrace the diverse group that identifies with the term. From Hindus to Muslims and many more cultural or religious groups within India, the Hijra have a shared and complex history that must not be ignored. By simplifying Hijra to “transgender,” the West continues to assert that its language and cultural understanding of gender is superior.

Unlike Western culture, which has defined gender identity solely in relation to gender presentation and imposed strict roles to conform to, the Hijra have a more complex relationship with society, religion, socio-economic status, living situations, and more. The West simplifies gender into a binary that has influenced the Global South and other formerly colonized countries, which creates a struggle for Indigenous people who fall into a “third gender” to be respected even when they have deep roots within their communities. Numerous Indigenous gender identities, such as the Hijra, are defining for themselves how their expression should be perceived. Many Hijra understand it to be a culture: “a tradition and a community that has its roots in ancient times,” while those who see themselves as transgender understand it to be “more like an identity. We see ourselves as transgender. There is no pressure from the community. We’re free to do what we want. But if we want to be hijras, there are rules restricting our actions.”[5] There are significant repercussions attached to association with the Hijra that do not extend to people who align themselves with the trans community; therefore, these groups are different. The Hijra have found a way of life and form gharana to survive, a culture that does not exist within trans communities. The Hijra are ostracized and limited in what they are able to do, forced to beg for support in the streets and subject to attack with no intervention or protection. Because of the harm directed towards the Hijra, it is essential to learn about their practices and community structures and to gain a basic understanding of their livelihood. In effect, understanding how individuals identify can assert agency for a group that is still oppressed within Indian society.

A main critique from the Hijra community about misconceptions is the failure of some to recognize the difference between Hijra and trans identity. Saria writes, “Using the word ‘transgender’ is a way to avoid using the word ‘hijra,’ since the word has been and continues to be used disparagingly by some people; it is a way of respect, as seen in the text of Indian legal and parliamentary documents.”[6] By not using the word and the specific identity, many disempower this marginalized community. For individuals in the West, it is essential that the language used by a specific community be carried into academic scholarship and common usage when discussing issues that affect that population. While many legal documents in India fail to establish a difference between the Hijra and transgender people, it is the job of all who wish to advance Hijra rights to assert agency for this community by using the correct terminology.

While terminology is often lost in translation, it remains the job of Western audiences to stay attentive to the Hijra. The Hijra have connected their spiritual existence to Hinduism for thousands of years. Many connect to Ardhanarishvara, a deity that presents both the masculine and the feminine through the god Shiva and the goddess Parvati. Hindu mythology makes specific reference to a third body that does not fit the binary of female or male, further supported by the existence of Ardhanarishvara: the symbolic understanding that femininity and masculinity are interconnected forces.[7] In the present day, the Hijra remain highly connected to the spiritual aspect of their identity and understand that others perceive them as people who can both bless and curse because of their connection to Ardhanarishvara.

While the Hijra continue to re-empower themselves in society through the gharana, other dynamics make the Hijra community complex and, if viewed only through a particular lens, beyond the comprehension of Westerners. Because the Hijra associate themselves closely with Hinduism, there has been a push to rename themselves the “Kinnar.” Saria notes that this project of renaming the Hijra as Kinnar is most often found in urban centers where access to privilege is more common. However, many activists believe that “it could possibly be an alibi to absorb hijras within ascendant right-wing Hindu nationalism”.[8] Hindu nationalism attempts to nationalize Hinduism and justify the oppression of individuals who are not Hindu, most notably Muslims. The harm this does to non-Hindu Hijra in particular, and to all Hijra, is significant, and it heavily impacts the liberatory work that can be done for all trans and third gender identities throughout India. The small population that benefits from proximity to Hindu nationalism does not make up for the exclusion of other marginalized people within Indian society, nor does it contribute to lessening the societal fear of the Hijra.

            Beyond South Asia, and especially beyond present-day North India and Pakistan where the Hijra predominantly live, there are numerous Indigenous gender identities that are often erased or excluded from the welfare of present-day governments and institutions. While these individuals served as community builders or held positions helping to care for children, many of these communities are now shamed because European colonizers asserted their two-gender binary traditions. Despite the ongoing oppression and marginalization of these groups, it is important for a Western audience to understand that many of these individuals, across the world, still hold positions of power in their societies. Even so, many are disenfranchised by political institutions and society, even though their identities have existed for hundreds or thousands of years.

            Within present-day Mexico, an Indigenous third gender identity, the Muxe, has existed for centuries within the culture of the Zapotec people, since before colonialism and the Columbian era. In the regions around the Zapotec people, there were many gods who were both women and men, reflecting the diversity in gender conceptualization.[9] The Muxe are Mexican Indigenous male-bodied, differently gendered people who do not fit into Western standards of the binary. The Muxe maintain traditions of the Zapotec, from language to dress to other elements of culture no longer widely practiced, but unlike the Hijra in India, they do not serve as religious representations of Mesoamerican gods or higher powers. Originally, the Muxe worked to preserve culture by completing traditionally feminine tasks such as embroidery or craftsmanship, and today they continue that legacy by maintaining community.

            The Muxe and third gender individuals in South Asia have parallel histories because of colonization. Prior to the arrival of Spanish, French, or British gender influences, third gender individuals had a significant role in their communities, and gender was not viewed in a binary way in which only male or female was acceptable. However, because of this long history of colonization and the imposition of gender binaries on South Asia and present-day Mexico, there is a societal push to exclude and discriminate against individuals who were previously considered sacred.

The study of third gender individuals across time and cultures has differed drastically depending on the political nature of the period. The focus on each dynamic of queer or third gender identity varies with new media, newly established civil rights protections, and the activism of local communities for recognition. Historians such as Ruby Lal and Emma Kalb tell the story of Mughal authority and how it shaped the lives of third gender individuals. Kalb illustrates how the Khwajasarai were placed at different levels of the hierarchy within the Mughal Court; some had specific access and privileges that were not given to other third gender individuals unless earned. Lal focuses more on how different emperors, such as Akbar, discussed or valued the Khwajasarai and explicitly notes that they were enslaved individuals, taken from their families at a young age. While some of these individuals were able to achieve high status in the court, they were not able to choose their identity and served the Court according to its immediate needs.

Queering India: Same-Sex Love and Eroticism in Indian Culture and Society by Ruth Vanita stands as one of the first major examinations of queer culture in Indian society over the last two centuries. The book was published in 2002; Vanita was inspired to write it by discussions raised by Deepa Mehta’s film Fire. The author’s thesis focuses on how colonialists and nationalists targeted, and continue to target, old traditions, “rewriting” them in an attempt to create uniform traditions and simplify history. The context of this book is powerful, as it was published soon after the rise of feminist, Dalit, and queer history and cultural studies in India in the 1990s. Despite writing early in the development of Indian queer studies and history, Vanita explored the idea of Hijras using Hinduism to explain their identity but was unable to connect Hindu nationalism, which is only briefly mentioned in the chapter, to the evolution of Hijra rights.

            While early historical research on Section 377 did not focus on the policy’s impact on Hijra, Jessica Hinchy’s research shifted the focus towards the restrictions on and policing of Hijra in colonial India. Hinchy also furthered research on the Criminal Tribes Act of 1871, which explicitly mentions “eunuchs,” the term the British used to describe the people known as Khwajasarais in some regions of India.

            Two of Hinchy’s first major articles were published in 2014, the same year as National Legal Services Authority v Union of India (NALSA). The NALSA decision granted Hijras recognition as a third sex as well as the right to choose their gender classification. Additionally, it sought to grant Hijra access to affirmative action policies, since it recognized them as a group that had been historically discriminated against. The historical context of Hinchy’s articles is relevant because it marked a significant shift in the study of queer culture in India, from one focused solely on same-sex relations to a more holistic view of queerness that includes people who identify as Hijra or transgender.

Vaibhav Saria studies present-day third gender individuals in South Asia through an anthropological lens. Their work explores how Hijra communities have formed and continue to face different challenges based on location and economic status. Saria’s research is dedicated to telling the stories of Hijra through an ethnographic lens in a period when Hijra are marginalized by society and their lives are heavily shaped by identity, kinship, and economic value.

As the historiography of third gender individuals in South Asia continues, I hope to expand on the modern-day consequences of this disenfranchisement: the oversimplification of gender identity that created the Hijra label, the alliance between Hijras and Hindu nationalists, and the continued push to assert “transgender” rights over same-sex marriage and relationships in India. While some historical works have focused on post-colonial movements against Section 377 and the Criminal Tribes Act of 1871, they lack an analysis of how 21st-century reactionary Hijra align themselves with religious nationalists and the far right, alienating same-sex attracted individuals. Much of this scholarship discusses trans and gay individuals as separate communities, a framing that has manifested in the politics of Indian society, rather than as groups sharing a similar history and a continuing narrative of betrayal despite allyship among all queer people across the globe. Additionally, research beyond Northern India and Pakistan must be done to tell a more diverse story of how these identities were originally disenfranchised.

During the Mughal period, from the 16th to the 19th century, there were structures that allowed local princes and royalty to assert power; one of these was the Mughal Court, a system of rules and laws. In the Mughal Court, significant expressions of hierarchy and control were asserted by royalty in the palace. In these spaces, eunuchs “served as another element in this formation of space, as embodied boundaries and mediators”.[10] Certain individuals served to ensure the safety and security of the leader, meaning that private spaces such as the harem or sleeping quarters had to be kept secure. Even so, eunuchs of different privileges and levels within the hierarchy had the power to enter these spaces.

Before attending to these responsibilities and tasks, young Khwajasarai needed to prove that they were ready to assume adult duties. Unlike other youth in society, who could access more responsibilities through the process of puberty, for the Khwajasarai “competence in adab [Islamic values of proper manners and conduct] was a significant marker of adult-hood,” a point that “may have broader relevance, particularly for male Islamic childhoods.”[11] This illustrates how differently third gender individuals were treated, even those with status in the court, because they were forced to follow good manners and proper conduct to standards above their peers. Additionally, “acculturation and kinship-making were broadly speaking part of the experiences of slave children in early-modern and modern South Asia,” where “forming cultural and interpersonal links appears to have been an important way in which child slaves coped with their enslavement and deracination.”[12] The young Khwajasarai were held to higher standards and taken from their homes at young ages to serve the court; community among third gender individuals within the court was needed for survival and assimilation, as they formed new cultural ties and personal relationships.

            Within the Mughal Court, there were different positions for the Khwajasarai. These third gender people served the Court but also participated in the harems, “a sacred area around a shrine; a place where the holy power manifests itself”.[13] Some “personal attendants (khawāsān) and palace eunuchs (mahallīyān)” would be “present behind the emperor,” along “with the nāzir (eunuch superintendent of the household) also flanking the emperor on stage left… the master of the ceremonies (mīr tūzuk) stands in front of the emperor, behind the most powerful Mughal state officials such as the wazīr al-mamālik, with mace-bearers (gurz-bardārs)”.[14] While these individuals did not hold the highest positions within the courts, their placement behind the emperor reflects the power structures that had been established and demonstrates their significance. Additionally, some eunuchs appear in historical records, such as the narration of Mahābat Khān’s coup, that demonstrate how these individuals were “stationed in proximate positions close to the emperor and around the more restricted parts of the palace.”[15] Freedom of movement with little restriction matters for any person in any era, and the Khwajasarai’s agency speaks to their power and significance in each court. Even within the unequal power dynamics of the social hierarchy, eunuchs were able to exist in close proximity to individuals of higher standing and had the possibility of moving up the power structure into nobility. That their stories are often recorded in archives is another sign that these individuals held real, if varying, importance within society.[16]

            The duties of the Khwajasarai changed with the Court’s context, because there were no fixed tasks assigned to individuals or strict caste divisions for these tasks, though certain privileges could be denied to some based on status within the Court. Some examples of this blend of power include the fact that “a water carrier could (and did) write a memoir, a foster nurse could serve as a diplomat and a swordsman could be a storyteller, however strict the codes of conduct that they were expected to follow.”[17] Agency and movement based on proximity to the emperor did not limit one’s duties, because anything could be significant in service to the Court.

            The Khwajasarai maintained close proximity to the emperor through their ritual practices. In formal public spaces and the inner areas of the palace, they were still able to take up space. Depending on their position, the Khwajasarai carried out “practical functions, such as holding fans, passing on petitions, or standing guard” that were essential to the hierarchy’s functioning.[18] In some functions of the court, Khwajasarai were able to “achieve positions of intimacy, knowledge, and influence with the emperor and members of the royal family,” and over time numerous eunuchs of high status took part in close encounters with the royal family. The prominence of the Khwajasarai changed with each emperor, but one thing stayed consistent: despite gestures towards forbidding the castration of young boys, among “all Mughal emperors from Akbar down to Awrangzeb… no one had previously issued an injunction against a practice that had enslaved young boys and turned them into eunuchs without their consent”.[19] This reasserts how third gender individuals were enslaved and treated as necessities for the functioning of the Court.

Despite differences in gender presentation and the status attached to being a eunuch, there was a range of opportunities, while low-ranked eunuchs “could become entangled in moments of political conflict, intrafamilial and otherwise, a situation that provided greater opportunities but also heightened risk”.[20] Women and non-eunuch males also served in similar positions throughout the palace, but significantly, the Khwajasarai were not excluded from practices reserved for those other than male or female. Excluding the Khwajasarai from the history of India and the Mughal Court erases a significant part of the diversity of gender and status.

            With colonial intervention, the historical significance and societal positions of the Khwajasarai were erased during the British colonial period. As a third gender identity, the Khwajasarai caused significant moral panic among British colonizers, yet they have also been left out of explorations of Indigenous gender identities throughout history. Hinchy made it clear that “the majority of archive life histories were recorded as accounts of ‘eunuchs’, not ‘Hijras’. It is sometimes difficult to distinguish those ‘eunuchs’ who identified as Hijras from those who found themselves categorised as ‘eunuchs’ because they crossdressed in theatrical or ritual contexts, or because their everyday gender expression was non-binary”.[21] Any usage of the term Hijra that draws quotes or evidence from Hinchy’s monograph, Governing Gender, will be italicized, as her politics and understanding of third gender identity changed between the monograph and her prior works. The italicization represents how Indigenous terminology was stripped away during the colonial period; Hijra is a term that cannot fully represent all individuals who were discriminated against for their sexual or gender presentation. Hinchy illustrated how the “British East India Company’s interventionist policies towards Indian-ruled principalities intensified, setting the stage for Awadhi khwajasarais to become embroiled in the sexual politics of imperial expansion.”[22] The Khwajasarais were seen as a threat to colonial rule because of their knowledge, traditions, and practices of kinship-making and community building; they could easily resist the policing imposed by colonial power.

To continue asserting control over the region and to force individuals to conform to Western standards, there were high levels of policing. Laws such as the Criminal Tribes Act of 1871 and Section 377 of the Indian Penal Code targeted the Khwajasarai disproportionately. During this period of policing, “colonial law marginalised diverse domestic and kinship forms that offended Victorian sensibilities”.[23] The Indian Penal Code was drafted in the 1830s by Thomas Babington Macaulay but was not enacted until the 1860s; the Code was inspired by English law and by the British need to codify their own standards across their colonial territories. Section 377 of the Code belonged to Chapter XVI, which concerned the punishment of sexual offences, including rape and sodomy. It provided for “the imprisonment (for life, or alternative up to ten years) of ‘[w]hoever voluntarily has carnal intercourse against the order of nature with any man, woman, or animal’ and specified that penetration was ‘sufficient’ to constitute ‘carnal intercourse’”.[24] Because the author focuses on the policing of performances, she stressed the importance of the Badhai, the Hijra’s traditional performance held at weddings and childbirths. The moral panic arising from British colonial officers and British masculinity was pushed onto the Hijra.

Hijras were targeted between 1850 and 1900, while sexuality was being regulated under Section 377. Hinchy described the significance of the term Hijra in modern-day Pakistan and Northern India, placing the Hijra in a particular region rather than suggesting that third gender people across South Asia all identified the same way. Rather than a general overview of Hijra gender, Hinchy provided information on how, for “the North-Western Province (NWP) government, the contested representations of Hijras in official archives were a troubling hurdle to suppressing the community and demonstrated the failure of colonial intelligence collection”.[25] The colonial government failed to carry out many of its prosecutions because of its lack of understanding of the Hijras.

A major shift in attitudes towards penetration, and in the scope of Section 377’s power, developed through a series of court cases over the 19th century, as courts used the Section to prosecute individuals and worked through the process of defining what constituted penetration. Ultimately, a “judge in the Brother Anthony case also concluded without adducing any reasons, that of the sexual perversions he had originally listed, only sodomy, buggery, and beastiality would ‘fall into the sweep’ of Section 377.”[26] Vanita described how the British asserted their power through their own gender and sexual expectations, while learning more about Indian attitudes to these issues through British men who kept mistresses. One of the first strategies the British pursued was dividing Hindu and Muslim religious customs and standards, but ultimately judges decided on a standard definition of what the Section criminalized. Vanita’s work on third gender individuals focused more on the cultural presence of “transvestites” in theatre. An emphasis is placed on how Hijras re-invented the culture of theatre but were ultimately held back by the taboo placed upon them because of their connection to women and womanhood. The author also chose to focus on how Hijras used Hindu myth to construct aspects of their womanhood; early in their persecution, Hijras justified their existence by appealing to their morality and its connections to Hinduism.[27]

            For the British colonizers, it made logical sense to find a way to oppress non-conforming individuals in order to enforce conformity throughout India, but they first had to identify who these individuals were and how they did not fit into society. Over a thirty-year span from the early 1870s to the beginning of the 20th century, “Hijras [were] seen as pimp, dancer, bard, performer, indefinite and non-productive/miscellaneous and disreputable.”[28] Western standards of productivity and social value were vastly different from Indian perceptions. The Khwajasarai and other Indigenous third gender individuals had existed within India for centuries, creating a clash of cultures and values.

The Indian Penal Code addressed “unnatural offences”, which were used to directly target the Khwajasarai, as they did not fit British standards for gender presentation and were seen as an active threat to the patrilineal order that colonial law wished to instill throughout India. As this legislation was being revised, the presence of Hijras became ever more relevant to colonial rule, especially throughout the 1850s, as they came to represent the symbol of Indian sexual perversity. By the time the Code was officially placed into law, Hijras were seen as a danger to children and, therefore, a greater threat to society.

While Hinchy supported the Khwajasarais’ original power structures, she also examined the transfer of power. Under the influence of the colonial government, the “definitions of the family and notions of sexual respectability narrowed greatly” and “evangelical ideology produced new middle-class definitions of the ‘private’ sphere of the household as a domestic and feminine domain, demarcated from the masculine ‘public’ sphere.”[29] Hinchy examined how class and perceptions of gender identity changed rapidly under the new government structure of British rule.

            Similarly, the Criminal Tribes Act of 1871 (CTA) was further legislation that helped to criminalize the Hijra in Indian society. It sought to criminalize Hijras as kidnappers, castrators, and sodomites, acts that were already punished through the IPC.[30] The CTA enabled the prevention of physical reproduction among Hijras, forced cultural elimination, removed children from Hijra care, further criminalized the activities through which Hijras made a livelihood, and interfered with Hijra succession practices. Overall, Hijras found their identities as well as their domestic arrangements policed by the CTA. Police officials were granted increased surveillance powers that allowed them to control the public presence of Hijras and to conduct investigations into “registered people’s” [people who were suspected to be eunuchs or similar classifications] households, removing children from their care by force.[31] Despite the varied lives that Hijras lived, their gender presentation, domestic arrangements, and entire livelihoods were policed by the colonial government, which also sought to interfere in Hijra discipleship and succession practices and to achieve complete cultural elimination.[32] The colonial government attempted to deal with the “issue” of the Hijra by ensuring there were no generational communities that could support rebuilding or maintaining their culture. The British viewed Hijras as “figures of failed masculinity” who dressed as women, and believed that Hijras needed to be eliminated from society to protect the image of proper masculinity according to Western standards.[33]

Hinchy discusses how sexuality and gender identity were unevenly disciplined at the local level.[34] This examination of how colonial law varied drastically by location in colonial India was relatively new in the research on transgender identities. By explaining the complex relationship between law and policing, Hinchy described the agency the Hijras had to resist discrimination, showing how they negotiated and used gaps in control to evade punishment.[35] This turn towards recognizing agency and highlighting resistance is powerful at a time when transgender individuals are asserting their own agency in the modern day.

Overall, this article comments extensively on how lapses in the enforcement of policy allowed the community to perform and cross-dress without policing, or to move to Indian-ruled states to continue their practices, strategically using political borders to maintain their identity.[36] In one of the first attempts to assert agency for queer individuals in colonial India, the author used the original terminology of the Hijras to restore political power to a label that had been stripped from that marginalized group in the 19th century.

            After decades of cultural erasure and violence towards the Indian population, South Asia was liberated from the colonial presence in 1947, after the end of World War II, through the Indian Independence Act. Central to the liberation movement were the newly established Indian government’s commitment to anti-colonial nationalism and international pressure on Britain to decolonize after WWII. The anti-colonial movement focused on reclaiming or reassembling parts of Indian culture that had been erased by Western standards. Secularism was enshrined in the Indian Constitution, created in 1949, because the Indian National Congress, a political party that had led much of the liberation movement, favored an India that would maintain religious diversity. However, other political interests, such as Hindu nationalists, were frustrated, wanting to create a country that would strictly follow Hinduism, reclaim the territory for people who followed the religion, and exclude Muslim Indians. Despite the conflict that built in India over religious rights, the new leaders of the Indian government quickly overturned some elements of the harm that had been done to the region under British rule. The Criminal Tribes Act of 1871, one of the major laws examined here, was repealed in 1949, the same year the Indian Constitution was created. Even though the CTA was overturned, Section 377 of the Indian Penal Code, which created anti-sodomy laws targeting homosexuals, was not removed. Some progress was made for third gender individuals, but not for homosexuals.

            While the Indian National Congress remained in power for five decades after the initial liberation of India, the late 20th century saw a rise of neo-liberalism and Hindu nationalism in India. Neo-liberalism not only opened up the Indian economy to foreign capital and privatization but also brought economic deregulation and rising wealth inequality in India during the 1990s.

Hindu nationalism took advantage of the rise of neoliberalism, which drew attention away from class conflict, divided working-class people, and deflected class-driven anxieties onto minority communities. The influence of Hindu nationalism would continue to strengthen within Indian electoral politics in the 2000s.

            As a country within the Global South, India continued to see the effects of colonialism in the 2000s. Elements of right-wing populism emerged in India, directed against cultural globalization and what was perceived as an onslaught of Western culture on India and Hinduism. Through the 2014 and 2019 elections, the BJP won the largest majority, elevating Shri Narendra Modi to the role of Prime Minister. Alongside these changes, the 21st century has seen various attempts to legalize recognition of a third gender classification and, more specifically, to recognize the Hijra. In 2014, there was a dramatic shift in legal recognition despite previous rulings that had supported the anti-sodomy laws created in the 19th century, which discriminated against homosexuals and targeted the Khwajasarai. The UK Constitutional Law Association published commentary in reaction to the ruling in National Legal Services Authority v Union of India (NALSA). This 2014 decision declared that Hijras must be legally treated as a third gender classification group. The commentary stresses how the decision directly contrasts with previous decisions, even one from a year prior, that ruled sodomy, with which the Hijras have historically been associated, a criminal action.[37] The new protections granted to Hijras include recognition as a third sex, the right to choose their presentation and classification, and affirmative action privileges as a group historically discriminated against. PM Modi, who would express intent to protect transgender individuals throughout his time in power, assumed office in May 2014, a month after the decision and the publication of this review.

            A few years after the establishment of the legal gender classification, the Indian Supreme Court overturned Section 377 of the Indian Penal Code. One of the organizations that commented on this historic ruling was the Human Rights Campaign, also known as the HRC. The HRC is an international organization with origins in Washington, D.C., founded in the 1980s. As it expanded, the Western organization took an interest in ensuring universal protections for LGBTQ+ identities across the world. After the decision that overturned Section 377 of the IPC, and before the provision’s official removal from the IPC in 2024, the organization celebrated this historic win in 2018. HRC Global Director Ty Cobb acknowledged it by congratulating “the LGBTQ advocates who worked tirelessly for decades to achieve this tremendous victory. We hope this decision in the world’s largest democracy and second most populous country will set an example and galvanize efforts to overturn similar outdated and degrading laws that remain in 71 other countries.”[38] For a Western audience with little knowledge of the historical legacies of colonial Britain’s enforcement of anti-sodomy laws, the HRC’s announcement does not convey how important yet complex this accomplishment is, allowing Westerners to believe that progress has been completed. The commentary notes that the IPC is a result of colonial rule enforced in 1860 and emphasizes its criminalization of adults of the same sex, but not its historical usage to disempower third gender identities.
The organization notes that it was PM Modi’s decision to allow the Supreme Court to rule on the case and that he chose not to directly associate his government with it.[39] An Indian member of the HRC staff, who worked to support the case’s argument, was quoted in the article addressing the affirmation of the right to one’s body and the right to love, but the language is vague for the audience, not claiming to be a clear protection for transgender people.

            Despite efforts to obtain certain forms of recognition, there is still considerable conflict within the Hijra community. Because there are different ways to contribute to the economy and receive payment, many Hijras argue over the “better” way to interact with the public and over whether begging should be considered shameful. Some Hijras go as far as to say that Hijras who “beg on trains [are not real Hijra]. They have no honor and are just gandus’” because they themselves belong “to these old, respected, established hijra households with large numbers of celas and nati-celas, whose right to take money at weddings and child births were undisputed, even protected by the police”.[40] Even with these differences of opinion over how to receive payment, Hijras who ask for money on trains provide a transaction through blessings and “work very hard to earn their money… Getting ready entailed bathing, putting on makeup, wearing clean, gaudy saris, and hiding large pins”.[41] There is a sacrifice for Hijras who put themselves in harm’s way for these transactions, and labor is performed through negotiations beyond the daily efforts to get ready and wear proper attire.

            As legal recognition has expanded, one tactic for Hijras to gain acceptance in Indian society has been collaboration with the Bharatiya Janata Party. The BJP is one of India’s major political parties and currently controls the Indian government. In 2019, the BJP released a manifesto that laid out its political agenda for the next several years if elected. The BJP had been in national power since 2014, after not having held the office of Prime Minister since 1999. Until 2019, the Prime Minister had made only vague statements of support for the Hijra or transgender community, but in this manifesto the party takes a clear stance on protecting the security of the “transgender community”. The manifesto section entitled “Empowering Transgenders” is a strong commitment to maintaining and re-asserting the standing that third gender identities in India had previously held. One promise embedded in the BJP’s policy is a commitment “to bring[ing] transgenders to the mainstream through adequate socio-economic and policy initiatives.”[42] Despite being a far-right, religious nationalist party, it has prioritized the protection of an identity marginalized across the world.

The Transgender Persons (Protection of Rights) Act of 2019 is defined as “an Act to provide for protection of rights of transgender persons and their welfare and for matters connected therewith and incidental thereto.”[43] The protections granted in the legislation extend to the entire country of India and are to be enforced by the central government. The main protections address non-discriminatory access to education, employment, healthcare services, and the right to property, and prohibit transgender people from being denied public office. Transgender people can also obtain formal recognition of their identity as transgender persons, and the legislation lays out the application process for that recognition. The legislation was passed in 2019, the same year as the re-election of the BJP and PM Modi’s solidification of power for another five years. Earlier that year, the party had released its manifesto addressing making transgender people “more mainstream”, and this protection act is a commitment to protecting that community. The number of individuals who have benefited directly from this legislation continues to be examined, but the limits of its progress can be seen in the unwillingness to use words native to the South Asian subcontinent.

However, there remains pushback from a diverse range of voices who believe that using Western terms instead of Indigenous ones like Hijra limits the progress that can be made. Individuals of high-caste privilege, such as Laxmi Narayan Tripathi, believe that supporting Hindutva and justifying the caste system guarantees their safety as third gender individuals. In advancing her own political agenda to support third gender rights, Tripathi has actively excluded many from the narrative and continued the oppression of other marginalized groups within India. She has joined Hindutva politics to argue that the Temple of Ram needed to be rebuilt after its destruction by the Mughal Empire, despite the direct marginalization this has imposed on the Muslim minority within India’s borders. Others, usually from marginalized castes or religious backgrounds, believe there is more to the fight for equal rights than what Tripathi has proposed. There are other ways to assert rights for Hijras and third gender individuals than sympathizing with an oppressive government that does not listen to the specific and diverse struggles of those who are neither men nor women.

            Beyond the Hijra and other third gender identities of South Asia, there are hundreds of Indigenous cultures, historically and in the present, that have centered individuals who were neither male nor female in their communities. As educators of culture, religion, and world history, social studies teachers have a duty to discuss the diversity of gender presentation and why certain individuals are discriminated against in their modern societies. No matter their race, transgender and third gender individuals deserve opportunities to see themselves throughout history. Rhetoric is often used to insinuate that trans and gay people are “new” and have not existed for centuries, but teaching about third gender individuals across cultures, continents, and races can advance the mission of moving beyond a Euro-centric curriculum. With a significant increase in Asian American, and specifically South Asian American, populations in the United States, a curriculum that increases the visibility of Asian culture and life is essential. Supporting students who are immigrants or first-generation students from South Asia starts with making their cultures visible in the classroom, and this is even more true for students with multiple marginalized identities, such as being trans.

A social studies classroom can choose to reflect on this country’s contribution to colonizing nations in the Global South and address how the country was founded on colonization and on attempts to remove, displace, and harm Indigenous populations. In the United States, trans rights are constantly being debated, and there are movements among cisgender queer individuals to separate themselves from association with transgender people. Anti-trans hate and legislation have spread around the United States within the last ten years, especially with the rise of Christian nationalism and alt-right conservatism. The ways that India and the United States have manifested different degrees of acceptance and legal recognition for different queer groups show the complexity of gender history. An introduction to this topic in our classrooms can help create conversations within the diaspora, and also prompt students of European descent to reflect on their privilege and their ancestors’ role in removing acceptance for third gender individuals.

ABC News (Australia). “Gender and Sexuality in Hindu Mythology | India Now! | ABC News.” 2022. https://www.youtube.com/watch?v=K8ZZAD9FhTw.

Bharatiya Janata Party. “Manifesto 2019”. 2019.

British India. “The Indian Penal Code”. 1860.

The Hindu Bureau. “Supreme Court Rejects Review of Its Same-Sex Marriage Judgment.” The Hindu, January 9, 2025. https://www.thehindu.com/news/national/same-sex-marriage-supremecourt-dismisses-petitions-seeking-review-of-october-2023judgement/article69081871.ece.

Chisholm, Jennifer. “Muxe, Two-Spirits, and the Myth of Indigenous Transgender Acceptance.” International Journal of Critical Indigenous Studies 11, no. 1 (2018): 21–35. https://doi.org/10.5204/ijcis.v11i1.558.

Hinchy, Jessica. “Enslaved Childhoods in Eighteenth-Century Awadh.” South Asian History and Culture 6, no. 3 (2015): 380–400. https://doi.org/10.1080/19472498.2015.1030874.

Hinchy, Jessica. Governing Gender and Sexuality in Colonial India: The Hijra, c.1850–1900. Cambridge: Cambridge University Press, 2019.

Hinchy, Jessica. “Obscenity, Moral Contagion and Masculinity: Hijras in Public Space in Colonial North India.” Asian Studies Review 38, no. 2 (2014): 274–94. https://doi.org/10.1080/10357823.2014.901298.

Hinchy, Jessica. “The Sexual Politics of Imperial Expansion: Eunuchs and Indirect Colonial Rule in Mid-Nineteenth-Century North India.” Gender & History 26, no. 3 (2014): 414–37. https://doi.org/10.1111/1468-0424.12082.

Höfert, Almut, Matthew M. Mesley, and Serena Tolino, eds. Celibate and Childless Men in Power: Ruling Eunuchs and Bishops in the Pre-Modern World. Abingdon, Oxon: Routledge, 2018.

Journeyman Pictures. “Demigods: Inside India’s Transgender Community.” 2019. https://www.youtube.com/watch?v=YxL5qfbtKqg.

Kalb, Emma. “A Eunuch at the Threshold: Mediating Access and Intimacy in the Mughal World.” Journal of the Royal Asiatic Society 33, no. 3 (2023): 747–68. https://doi.org/10.1017/S1356186322000827.

Khaitan, Tarunabh. “NALSA v Union of India: What Courts Say, What Courts Do.” UK Constitutional Law Association, (2014). https://ukconstitutionallaw.org/2014/04/24/tarunabh-khaitan-nalsa-v-union-of-india-what-courts-say-what-courts-do/

Peters, Stephen. “India Supreme Court Overturns Colonial-Era Law Criminalizing Same-Sex Relationships.” Human Rights Campaign, 2018. https://www.hrc.org/press-releases/india-supreme-court-overturns-colonial-era-law-criminalizing-same-sex-relat.

Republic of India Parliament. The Transgender Persons (Protection of Rights) Act. 2019.

Saria, Vaibhav. Hijras, Lovers, Brothers: Surviving Sex and Poverty in Rural India. Oxford: Oxford University Press, 2023.

Vanita, Ruth, ed. Queering India: Same-Sex Love and Eroticism in Indian Culture and Society. New York: Routledge, 2002.


[1] Journeyman Pictures, “Demigods: Inside India’s Transgender Community,” June 15, 2019, https://www.youtube.com/watch?v=YxL5qfbtKqg.

[2]  Jessica Hinchy. “The Sexual Politics of Imperial Expansion: Eunuchs and Indirect Colonial Rule in Mid-Nineteenth-Century North India.” (Gender & History, 2014), 416.

[3] Hinchy, “The Sexual Politics of Imperial Expansion”, 420.

[4] Saria, Hijras, Lovers, Brothers, 3.

[5] Journeyman Pictures, “Demigods: Inside India’s Transgender Community.” (2019).

[6] Vaibhav Saria. Hijras, Lovers, Brothers: Surviving Sex and Poverty in Rural India. (Oxford: Oxford University Press, 2023), 4.

[7] ABC News (Australia), “Gender and Sexuality in Hindu Mythology | India Now! | ABC News,” June 27, 2022, https://www.youtube.com/watch?v=K8ZZAD9FhTw.

[8] Saria, Hijras, Lovers, Brothers, 4.

[9] Jennifer Chisholm. “Muxe, Two-Spirits, and the Myth of Indigenous Transgender Acceptance.” (International Journal of Critical Indigenous Studies: 2018), 25.

[10] Emma Kalb. “A Eunuch at the Threshold: Mediating Access and Intimacy in the Mughal World.”, (Journal of the Royal Asiatic Society 33: 2023), 752.

[11] Jessica Hinchy.  “Enslaved Childhoods in Eighteenth-Century Awadh.” (South Asian History and Culture: 2015), 393.

[12] Hinchy. “Enslaved Childhoods in Eighteenth-Century Awadh”, 394.

[13] Almut Höfert, Matthew M. Mesley, and Serena Tolino, eds. Celibate and Childless Men in Power : Ruling Eunuchs and Bishops in the Pre-Modern World. (Abingdon, Oxon: 2018), 96.

[14] Kalb. “A Eunuch at the Threshold”, 756.

[15] Kalb. “A Eunuch at the Threshold”, 755.

[16] Kalb. “A Eunuch at the Threshold”, 753.

[17] Höfert, Mesley, and Tolino, eds., Celibate and Childless Men in Power, 100.

[18] Kalb. “A Eunuch at the Threshold”, 756.

[19] Höfert, Mesley, and Tolino, eds., Celibate and Childless Men in Power, 103.

[20] Kalb. “A Eunuch at the Threshold”, 768.

[21] Hinchy, Governing Gender and Sexuality in Colonial India, 143.

[22] Hinchy, “The Sexual Politics of Imperial Expansion”, 414.

[23] Hinchy, “The Sexual Politics of Imperial Expansion”, 420.

[24] Hinchy, Governing Gender and Sexuality in Colonial India, 52.

[25] Hinchy, Governing Gender and Sexuality in Colonial India, 119.

[26] Ruth Vanita ed., Queering India : Same-Sex Love and Eroticism in Indian Culture and Society (New York: Routledge, 2002), 22.

[27] Vanita, Queering India, 171.

[28] Hinchy, Governing Gender and Sexuality in Colonial India, 41.

[29] Hinchy, “The Sexual Politics of Imperial Expansion”, 420.

[30] Hinchy, Governing Gender and Sexuality in Colonial India, 107.

[31] Hinchy, Governing Gender and Sexuality in Colonial India, 2.

[32] Hinchy, Governing Gender and Sexuality in Colonial India, 93.

[33] Hinchy, “Obscenity, Moral Contagion and Masculinity”, 284.

[34] Hinchy, “Obscenity, Moral Contagion and Masculinity”, 276-277.

[35] Hinchy, “Obscenity, Moral Contagion and Masculinity”, 277.

[36] Hinchy, “Obscenity, Moral Contagion and Masculinity”, 286.

[37] Tarunabh Khaitan. “NALSA v Union of India: What Courts Say, What Courts Do.” (UK Constitutional Law Association: 2014), 2.

[38] Stephen Peters. “India Supreme Court Overturns Colonial-Era Law Criminalizing Same-Sex Relationships”, (Human Rights Campaign: 2018), 1.

[39] Peters, “India Supreme Court Overturns”, 2.

[40] Saria, Hijras, Lovers, Brothers, 109.

[41] Saria, Hijras, Lovers, Brothers, 115.

[42] Bharatiya Janata Party. “Manifesto 2019”, 36.

[43] “The Transgender Persons (Protection of Rights) Act”, (Republic of India Parliament: 2019), 1.

The Minoans: The Forgotten Sea Empire

How can I ignite a passion for history in my students? That is a question I found myself asking while teaching at Trenton Central High School during my first clinical experience at The College of New Jersey. Naturally, I began by looking back at my own high school teachers, trying to remember what they did that allowed me not just to passively learn, but to explore my interests as well. The paper that follows this introduction is the capstone paper I wrote while studying history at TCNJ. It covers a people known as the Minoans. These seafaring people of the Bronze Age are not likely to be found in any high school history textbook. However, I decided to write about the Minoans at such length because of a project I did in my English class in high school. (Yes, you read that right, my English class.)

            My English teacher at the time, Ms. Lutz, had allowed the class to do a presentation on a topic of our choosing. As a person who found English very boring and history much more interesting, I was excited by this project, as I was able to dive deeper into a topic I cared about. I ended up settling on the Minoans, as I had only heard their name once, briefly, in a video discussing Crete. Ms. Lutz’s English project allowed me to have choice in my learning while developing my presentation-making skills and teaching me how to do proper research. If the goal of your lesson is to develop student research and presentation skills, then focus on that. Students will be much more willing to speak in front of the class if they are passionate about the subject. That little bit of research at the high school level might even turn into a capstone paper one day. So why is this important? How does this help me create passion for history in the classroom? Give your students some agency in what they learn. Let them tell you what they find interesting about U.S. or world history and let them explore that interest in your class. This also shows us that history does not have to be confined to the history classroom; other subjects can use history as a backdrop to explore concepts and develop new skills.

During the Bronze Age, trade flourished in the Mediterranean. Few people were better situated to capitalize on this fact than the inhabitants of the island of Crete. The people of this island during the 3rd to 1st millennia B.C.E. are known to modern historians as the “Minoans”. Who were the Minoans, and what did they do? The Minoans excelled at creating high-quality products. Their early mastery of pottery allowed them to create vessels for holding agricultural products like olive oil. Faced with a lack of valuable metals and materials, such as copper and tin, on the island, they were forced to turn to trade for rarer resources. This trade centered on providing olive oil and other goods in exchange for precious raw materials that could be used to create desirable specialized products. The operation eventually expanded into an intricate sea trading network that encompassed large portions of the Mediterranean and beyond; Minoan products have been found as far away as the Indus River Valley. Material goods, however, were not the only thing the Minoans traded. Culture was readily exchanged as well, both willingly and as a side effect of trade: the Minoans managed to spread their own culture while incorporating elements from foreign cultures that proved beneficial. While much information about the Minoan civilization has been lost to history, the vastness and importance of their trade empire, both economically and culturally, cannot be overstated. Many civilizations of this time, like the Phoenicians, Sumerians, and the Harappans of the Indus River Valley, tend to overshadow the Minoans, but they should be seen as cultural equals to these complex societies. Their central geographic location, coupled with the need to trade for raw materials and the fostering of skilled artisans, enabled the Minoans to become a Bronze Age thalassocracy with influence over many civilizations.

The Bronze Age in Crete is generally considered to have lasted from around the 3rd millennium B.C.E. to the 1st millennium B.C.E.[1] The Minoans received exposure to metallurgy and bronze making from the east. The island of Crete is the largest in the Aegean Sea and also the furthest south. This geographical position made Crete a natural stop on the many trade routes of the Mediterranean; it was perfectly positioned to receive seafaring merchants from all of its neighbors: mainland Greece to the northwest, the Cyclades to the north, Anatolia to the northeast, Egypt to the southeast, Cyprus to the east, and, even further east, Syria. This placed Crete in the middle of some of the most important civilizations of the Bronze Age. The innovations of the Bronze Age first began in the east, and it is no wonder that the Minoans gained access to this knowledge. While the Minoans were influenced heavily by the cultures they came in contact with, they developed a distinct culture of their own. This is contrary to what historians of the past once believed: historians used to think that the Minoans were not a distinct culture but more of an imitator of Anatolian, Syrian, and Egyptian customs. This could not be further from the truth; instead, the Minoans created a highly advanced culture which spread its influence to the furthest reaches of the known world at the time.[2]

Even in the 21st century, archeologists writing about Minoan cultural spread, like Cyprian Broodbank and Evangelia Kiriatzi, note that the “Minoanization” of surrounding islands and the Mediterranean remains controversial.[3] Cultural spread was not the only highly contentious aspect of the Minoan civilization. An article by Chester G. Starr exemplifies how some scholars used to feel about the Minoans having large influence in the Mediterranean or even the Aegean. Writing in 1955, Starr confidently dismisses the Minoan thalassocracy, stating

The Minoan thalassocracy is a myth, and an artificial one to boot. It is amazing that the patent falsity of the basic idea has never been fully analyzed, for neither logically, archaeologically, nor historically can the existence of a Cretan mastery of the seas be proved.[4]

As the history of the Minoans becomes clearer through archeological finds, Starr’s article appears more and more outdated. While he recognizes that trade existed between Crete and Syria as well as between Crete and Egypt, he heavily downplays the Minoan involvement in this trade, proclaiming instead that the Minoans were nothing more than intermediaries between great powers.[5] Starr even writes off Minoan control of the Aegean by saying that they would not have been able to field the required number of ships.[6] The idea of Minoan colonies is also completely downplayed as nothing more than a few factories created by Minoans where the native populations of those islands could gather and produce products.[7] Early- and mid-20th-century historians certainly did not see the Minoans as being as capable as they were.

In 1962, an article by Robert J. Buck continued to echo this sentiment. Buck writes, “No matter how prosperous Crete may have been, there was simply no place in the Late Bronze Age for a Minoan thalassocracy.”[8] His reasoning is that Crete did not have the industry capable of producing enough goods for a large overseas market.[9] It was not until the 1990s that scholars began to find more evidence that Crete could have held an empire of the sea and that the Minoans were their own advanced culture.[10] Today the topic is still debated, and the true scale of the Minoans’ influence is not completely clear. The evidence gathered in this paper, however, points to the existence of a heavily influential Minoan thalassocracy.

Trade was what built this empire and was the primary way that the Minoans spread their culture. The geographic location of Crete was not the only factor that led to the Minoans creating a trade empire. The Minoans had access to plentiful land on which to produce agricultural products in large quantities. Grapes, olives, pears, and other crops were vital to the Minoan economy and way of life. Grapes were used to produce wine; tablets found at Knossos, the Minoan capital, reference 420 grape vines in the area, and tablet “GM 840” records over 14,000 liters of wine brought to Knossos as a product of the last harvest.[11] Olives were also fundamental to the people of Crete and the Mediterranean and were always in high demand. Olives and olive oil took a long time to spoil and were used in cooking, for washing oneself, as lamp fuel, and as body oil. Olives were enjoyed both in their pressed oil form and eaten whole without being pressed. These many different uses made olives a major crop of the Minoan economy. More tablets found at Knossos document 9,000 liters of olives being produced in just the Dawos area of the Messara plain of Crete.[12] Pears were also grown and might even have been native to Crete, with Minoan trade being the reason the fruit spread throughout the Mediterranean.[13] While having an abundance of agricultural products is certainly good, the island of Crete lacked the valuable metals that were the building blocks of Bronze Age societies. Metals like copper, tin, and gold were not found readily enough to support demand on the island, and this forced the Minoans to turn to their neighbors to acquire these metals.

Copper and tin were combined to create the alloy of bronze, a vital resource of the time. The island of Cyprus to the east was a large supplier of copper to the Mediterranean and made a perfect trade partner for the Minoans. Copper ingots from Cyprus were found at the Minoan palace-temple of Zakro, confirming trade between the two islands.[14] Connections between Cyprus and Crete seem to date back to the early and mid-Bronze Age.[15] Minoan pottery has been found on Cyprus in important places like palaces, and ingots of various metals traded to the Minoans by the Cypriots have been found on Crete.[16] Some Cypriot pottery has even been found in the port of Kommos on Crete. All of these connections show a healthy trade relationship between the two islands. It is also clear that the Minoans and the Cypriots developed some kind of rapport, as the Cypro-Minoan script begins to appear on traded items. The Cypro-Minoan script was a shared syllabary that the two islands utilized in trade with one another.[17] While the script remains undeciphered, it allows archeologists to tell when items have come from Cyprus. Lead, copper, and tin ingots have been found bearing Cypro-Minoan markings, with Cypriot lead mines being identified as far away as Sardinia.[18] These are exactly the metals the Minoans would have been in heavy need of, and thus this close relationship between Cyprus and the Minoans makes sense. The Minoans would have used these metals to manufacture all kinds of products. Cypriots took lead from mines in Sardinia and traded it to the Minoans, who then used it to create objects that were sold overseas to places in Anatolia and Egypt. This is a perfect example of how interconnected Mediterranean civilizations were in the Bronze Age, and it is not dissimilar to trade in the modern day.

Evidence of overseas trade is easy to spot all around Crete. For example, in the city of Myrtos, imported metal objects, stone vessels, and obsidian have been found. Within the city, pottery and textiles were produced, which could have been exported in exchange for these goods. Myrtos, like many Minoan cities, was located near the coast, and many of these cities had their own ports and more access to the outside world than might be expected.[19] The Minoans most likely constructed their cities with trade as a central tenet. This is evident in the distribution of settlements around the island. The west side of Crete is almost completely barren of settlements, while the north, south, and east have plenty of large cities. From a trade point of view this makes sense, as the Minoans would have been primarily trading with the Cyclades to their north; Anatolia, Cyprus, and Syria to the east; and Egypt to the south. While Minoan pottery has been found to the west, in places like Malta, the Minoans seem to have been more focused on conducting their business in the eastern Mediterranean. Ports and harbors did not exist only in large cities. Evidence of Minoan ports has been found in many coastal regions of Crete and on nearby islands like Dia and Thera.[20] Having ports scattered throughout the sea gave Minoan sailors many points where they could stop and rest. This was also crucial for long-range seafaring, as these journeys could be very dangerous and adverse weather conditions could spell disaster for ships and their crews. Having ports along the way to their destination allowed ships to stop and wait for more favorable weather and wind conditions if needed.

            The Minoans traded in many different kinds of products and were not limited to their agricultural surplus of olives and wine. In fact, skilled artisans were highly valued in Minoan society and were some of the most adept in the Mediterranean. Vathypetro, a Minoan building in the Cretan countryside, gives historians a glimpse into the industries the Minoans engaged in. The building is dated to 1580 B.C.E. and had a wine press, clay loom weights for weaving, an oil press, 16 storage jars, multiple potters’ wheels, and a farm on the property.[21] Rodney Castleden, author of Minoans: Life in Bronze Age Crete, suggests that it could have been a summer residence for the king or a wealthy landowner, or just as likely a communal industrial and agricultural center where Minoan artisans and farmers in the area could work. It is clear that Minoan goods were highly valued, as they have been found all over the Mediterranean and beyond. Other cultures also show clear inspiration taken from Minoan frescoes and pottery, demonstrating the scale of Minoan influence. This influence was strongest in Minoan colonies and close neighbors like the Mycenaeans. However, even proud and ancient civilizations like Egypt appear to have respected the Minoans to a certain degree and had interest in their art and products.

            At some point the Minoans began to make changes to their social structure to prioritize artisans and the manufacture of luxury goods. This can be seen in the Minoan palace-temples. In Minoan society, towns littered the countryside, but large cities often contained massive palace-temples where the elite and priests would live, and, in the case of Knossos, a king or some kind of central authority. The main temples were located in Knossos, Kydonia, Phaistos, Zakro, and Mallia.[22]

Archaeologists have been able to discover that at some point before 1700 B.C.E., Minoan craftsmen and artisans concentrated within these temples. It seems that artisans were gathered to work collectively as full-time specialists paid by the state. This proximity to other skilled specialists allowed them to share ideas and learn from each other, creating ambitious works for domestic use and for transportation overseas.[23] At Phaistos, some pithoi made by these craftsmen survive to this day within the storerooms.[24] Castleden describes the work made by these specialists as reaching “levels of technical skill and artistry so high that some of their works rank among the finest ever produced in Europe.”[25] It is no wonder that Minoan products were sought out all through the Mediterranean and beyond. By pooling their talents and producing artwork that surpassed anything their competitors were producing, they found a lucrative market in luxury goods.

            With the palace-temples being the concentration point for artisans, they also became trade hubs, both because they held the skilled workforce needed to use imported raw materials and because they were the centers of bureaucracy in their regions. Imported materials found include silver, tin, copper, ivory, gold, lapis lazuli, ostrich eggs and plumes, exotic stones, and more.[26] These materials were worked by specialists at the temples, and the finished products were sold both to local markets and taken by seafaring traders to foreign markets. Having the temples act as centers of industry, trade, faith, and bureaucracy, with five of them spread around the island, created an efficient, administratively capable government. Some early theories about the Minoan government suggested that these temples were seats of different city-states like those of mainland Greece. However, the consensus now is that each temple had a local bureaucracy that controlled a portion of the island, but in the end all were subservient to the main seat of power in Knossos. Keeping a well-run and organized government is vital for sustaining a far-reaching trade empire with connections around the world, and it appears the Minoans recognized this. It is very possible that the Minoans learned how to organize themselves into a more centralized state by looking at the Egyptians.

            As the Minoans were looking toward the Egyptians for inspiration, other less developed peoples were looking to the Minoans as an example of a developed culture. By examining the ruins of a palace at the ancient site of Tel Kabri, located in modern-day Israel, archeologists have noticed striking similarities between this palace and Minoan palaces. For example, fresco fragments have been found there that mimic the Minoan style. The palace's layout and construction also seem to coincide with the period in which the Minoans were expanding their own palaces.[27] It should not be surprising that foreign merchants most likely visited Knossos or other palaces on Crete and were amazed at what they saw there. When they returned home, the nobility of places like Tel Kabri wanted to emulate the great Minoan culture to lend additional legitimacy to their own rule. This is an example of the Minoans having great influence on outside cultures without actively trying to exert it.

Additionally, by analyzing animal bones found at the site, archaeologists determined that the people at Tel Kabri started using meat cleavers to cut bone and extract marrow. This occurred just slightly after the same development happened within Minoan society.[28] Again this showcases how trade partners of the Minoans benefited not only from the exchange of goods but also from the exchange of ideas coming from Crete.

Egypt was one of the many civilizations that benefited from trade with the Minoans. This is evidenced by the many Minoan products found in Egypt, most commonly pottery: pottery from Crete has been found all over Egypt. In her article “The Perceived Value of Minoan and Minoanizing Pottery in Egypt,” Caitlín E. Barrett discusses why Egyptians desired Minoan pottery and who in Egypt was buying it. Through her findings she concludes that people of nearly all strata had access to Minoan pottery and other Minoan products such as cups.[29] Cretan pottery has been found in Egyptian homes and even graves, indicating that it was used either practically or as display pieces, essentially showcasing that the owners possessed exotic pottery from a distant land.[30] Its presence in Egyptian graves is also a strong indicator that Minoan pottery was revered in some respects and that some Egyptians wanted to take it with them even into the afterlife. The Minoans themselves imported only a very small number of manufactured goods, as they produced most, if not all, of these goods domestically. Of the manufactured goods imported to Crete, almost all that have been found were Egyptian.[31] This demonstrates the longstanding connection between these two cultures and the admiration they held for one another.

It can be deduced that Minoans had been visiting Egypt for many years, evidenced by the style of clothing the Egyptians portrayed Cretans wearing in their paintings. As Minoan clothing trends changed, as can be seen in Minoan artwork on Crete, these same changes are depicted in Egyptian iconography featuring Minoans. The Rekhmire paintings, located in the tomb of Rekhmire in the Egyptian city of Thebes, depict Minoan envoys wearing patterned kilts without codpieces and a hemline sloping down toward the front.[32] Through cleaning, an original coat of paint was revealed showing an older style of Minoan dress: kilts with codpieces and an upward-sloping hemline. This indicates that the Egyptians made clear efforts to update their portrayal of Minoans through the centuries. Wall paintings in the Tomb of Senmut, dating to the 1500s B.C.E., also depict Minoans in the older style of outfit.[33] At the same time, in Minoan frescoes on Crete, monkeys are painted blue, a common feature of Egyptian portrayals of monkeys. A study published by Cambridge University Press even suggests that the Minoans were the first Europeans to have contact with non-human primates.[34] The frescoes also often feature depictions of papyrus, which was not grown on Crete but rather procured from Egypt. The presence of papyrus in these frescoes may also indicate Minoans trying to replicate features commonly seen in Egyptian art. These two features of Minoan frescoes point to the Minoans being influenced by Egyptian art. When added to the Egyptian portrayals of Minoans, a picture starts to emerge of two cultures that respected each other and came into contact with each other often.

Another piece of evidence that lets historians know that Egyptian and Minoan cultures came into frequent contact is the inscriptions written by Egyptians discussing Minoans. One such inscription can be found, once again, in the Tomb of Rekhmire. Rekhmire was an Egyptian vizier who was visited by the Minoans around 1470-1450 B.C.E.,[35] and the inscription under a painting of the Minoan envoys reads “Princes of the Land of Keftiu (Crete) and of the isles which are in the midst of the sea.”[36] The isles mentioned most likely refer to the other islands of the Aegean. The mention of “the isles” in this inscription is good evidence that the Minoans had established colonies and trade posts and had built an empire in the Aegean by the 15th century B.C.E. Another inscription, at the base of a statue in the funeral temple of Amenhotep III, lists nine place names. Four were located in Pylos, a Mycenaean kingdom, and four were cities on Crete: Knossos, Amnisos, Lyktos, and Dikte.[37] The final place name was the island of Kythera, which was a Minoan colony.[38] The purpose of this inscription is not entirely known, but it possibly relates to trusted trade partners or cities in which trade deals were made within Amenhotep’s lifetime.

Cultural exchanges between the Minoans and Egyptians were not entirely one-sided. Some evidence from a discovery in 1991 suggests that the Minoans had substantially more influence over Egyptian culture than previously thought. At Tell el-Dab’a, a Minoan-style fresco was uncovered depicting, among other things, a bull leaping. The bull is a common trope in Minoan artwork and is often associated with Crete even in the present day. In her article, Sara Cole looks into the techniques used to determine whether this fresco was Minoan or Egyptian in origin. The fresco uses a lime plaster, which corresponds with frescoes found in the Minoan cities of Knossos and Akrotiri. In contrast, Egyptian wall paintings utilized a gypsum plaster.[39]

Another indicator that this fresco is Minoan in origin is its proportions. Egyptians utilized a grid to create proportions unique to Egyptian art, and they also had particular proportions for human beings. There is no evidence of these proportions or this grid being followed in the Tell el-Dab’a fresco.[40]

Furthermore, there is evidence that a string was used on the wet plaster to create borders, which is an explicitly Minoan technique.[41] From these observations it is clear that the fresco was created using Minoan techniques and imagery. The question that arises is whether this is merely an imitation of Minoan art or whether Minoans were hired to create this fresco for Egyptians. Cole argues the latter by looking at the pigments utilized in the fresco. All the pigments are common in Minoan frescoes found in Knossos and elsewhere. By looking specifically at the Egyptian blue and the elements that comprise the pigment, evidence for the fresco being a commissioned work comes to light. The type of Egyptian blue used in this fresco contains a copper-tin alloy that had been used for centuries by Minoans and can be found in frescoes on the island of Thera and in Knossos on Crete itself. This composition for Egyptian blue was not typically used by Egyptians and instead indicates that the painters most likely brought it with them from Crete.[42] It is clear that skilled Minoan artisans were valued enough to be hired even by the great powers of the time and that these painters were specifically sought out. While historians used to believe that the Minoans merely imitated the cultures around them, this fresco proves that Minoan culture was valued by others and that even the Egyptians looked at Minoan art as desirable.

Another specialized art form that the Minoans became masterful at was faience: glazed pottery usually decorated with paintings. Between 1700 and 1400 B.C.E., Minoan faience was perfected, and the Minoans were able to create polychrome faience pieces with many different inlaid colors.[43] M.S. Tite et al., in the Journal of Archaeological Science, used electron microscopy to determine the original colors of weathered faience samples recovered from Crete. As a consequence of severe weathering, recovered Minoan faience is often gray, white, and brown, with most of the color washed away. However, through the use of electron microscopy, “bright turquoise blue, purple and violet, and pale yellow-green and greenish turquoise”[44] have all been determined to have originally been visible on these pieces. Rodney Castleden looked at the faience industry as proof of collaboration between the different artisans within the temples. He comes to this conclusion by noting that faience is a craft that utilizes the “shared experience of many different crafts [which] implies collaboration.”[45] Potters, pot painters, and even the designers of particular faience imagery could all be different specialists who came together to create faience works of very high quality. These works could then be exported and traded for a much greater value than that of the materials used in their construction.

Minoan stoneworking was also highly desired around the Mediterranean. The Minoans used stone to make vases, buckets, jars, bowls, and lamps with incredible skill. They utilized highly creative designs, for example, pot lids with handles sculpted to resemble reclining dogs.[46] They used various and sometimes exotic stones from around the Mediterranean to create colorful masterpieces: rosso antico from the Greek mainland, white-speckled obsidian from the island of Yiali, alabaster from Egypt, gypsum, limestone, serpentine, porphyry, black obsidian from Cappadocia, basalt from the Peloponnese, and more.[47] The Minoans even coated some of these stoneworks in gold leaf, and their stoneworkers were extremely desired by other cultures.[48] The Minoan economy depended on workers like these to make highly desirable products for foreign and domestic trade. Gathering these stones from across the Mediterranean and creating beautiful stoneworks was only possible with the centralization of artisans within the palace-temples and a vast trade network. Taking Crete’s rather meager raw resources and using them to trade for specialized materials like obsidian or serpentine, which could then be turned into high-quality, in-demand masterworks, was the formula the Minoan government used to become extremely wealthy and renowned.

This wealth is evident even today when traversing the ancient ruins of the Minoan temple-palaces. Large frescoes and decadent architecture can be seen, as well as the monumental scale of the palaces. The palaces would have been multiple stories high, with the upper floors holding the more extravagant rooms like dining and banquet halls, while the lower floors were relegated to housing the workshops and storerooms.[49] There were guest and service stairways as well as kitchens and pantries where food would be prepared for guests.[50] The rooms would also have been beautifully decorated with painted walls, columns, and frescoes. The Minoan nobility clearly wanted to show off their wealth when designing these palaces. The layouts of the palaces themselves would also often be intricate and creative, with no two Minoan palaces being the same. It is no wonder that the story of Daedalus, an extremely skilled architect, takes place on the island of Crete. It seems that Minoan architecture over time became somewhat legendary, and constructions like the labyrinth of Knossos sparked myths to grow when the Greeks conquered the island. Another interesting aspect of the Minoan palaces is that they embody both function and form. They are extremely grandiose but still hold the storerooms for goods waiting to be exported as well as the artisans’ workshops. The palaces were not just residences for nobility but quite literally the economic heart of the island.

Artifacts made in these workshops, like a collection of 153 silver cups and one gold cup, have been found in the ancient Egyptian town of Tôd. The Egyptian deposit in which they were found has been dated to the 1920s B.C.E., and all the cups appear to have been made by Minoans in a style used on the island from 2000-1900 B.C.E.[51] The cups were apparently offered to the Egyptians as tribute from a Syrian king. This shows that Minoan products traveled widely and were valued enough to be accepted as tribute. Gold itself was imported to Crete from Egyptian gold mines in the Sinai, the Arabian desert, and Anatolia. Skilled Minoan craftsmen worked this gold into cups, jewelry, sword hilts, statues, and more, then sold these products overseas at a large profit. Gold cups made by Minoan craftsmen were also found at a burial in Vaphio on the Peloponnese.[52] Examples of Minoan products made of precious metals are rare, especially on Crete itself, as many would have been stolen and sold or melted down at some point. That makes any surviving examples extremely useful for understanding the level of expertise the Minoans had when working with silver and gold.

Another valuable resource imported by the Minoans was ivory. Ivory carving was done on Crete and might have been taught to the Minoans by the Syrians, whose carvings share much in common with Minoan examples. An example of a Minoan ivory relief carving was found in an unlooted Mycenaean tomb. The carving was probably a decoration attached to wooden furniture.[53] It features a scene of marine motifs such as argonauts, seashells, seaweed, and rockwork.[54] Marine motifs seem to be very common across all mediums of later Minoan art; as the Minoans used the sea as their lifeblood, this makes sense. Perhaps the most common way Minoans used ivory, though, was in the creation of sealstones. Sealstones could be made of a few different materials, like stone, ivory, or bronze, but they served an important purpose in society: they were essentially the Minoan equivalent of today’s signature. Every person of importance or businessman would have their own unique seal.[55] Many different designs have been found on Minoan sealstones, but they often featured animals like bulls, lions, birds, or marine life.[56] They also sometimes featured patterns common at the time, like the swastika.[57] For a highly mercantile society, sealstones were even more pertinent. Merchants could stamp pottery with their seal so buyers would know who the product was from; it was essentially a Bronze Age business logo.

As other specialized crafts developed, simple pottery did as well. Castleden calls Minoan pottery “the finest… in the civilized world.”[58] Minoan pottery featured elegant designs and would often be painted with intricate patterns and swirling shapes. Kamares ware is just one type of Minoan pottery, featuring a dark background with light-colored designs over top.[59] A Minoan pithos found in Phrygia showcases an optical illusion of six conjoined heads. The viewer is only able to see around two heads at a time, as the concentric lines only appear to form heads when they are in the viewer's direct eyeline.[60] This kind of design, with images hidden in minimalist patterns, is not uncommon for Minoan pottery. A jug depicting birds made out of spirals and other flowing shapes shows how Minoan painters loved to play with perception through highly creative and artistic designs.[61] Another common feature of Minoan pottery is the marine motif. Minoans loved showing marine life, especially animals like octopuses and fish. The sprawling arms of an octopus provided a great way to fill up space on the pottery.[62] The marine motifs also fit the seafaring nature of Minoan society, and Minoans would have had plenty of experience with these animals to render them accurately.

Minoan trade did not end at Mediterranean civilizations. A new study suggests that the Minoans had direct trade routes with the Indus River Valley civilization, located in areas of modern-day Pakistan, Afghanistan, and India; it is clear the Minoans had considerable trading capability to be able to do business so far away. The Minoans were not simple intermediaries in these trade deals; instead, they were a main trade partner. This was unearthed by comparing the weight measurements of each society. Merchants trading with other civilizations would bring their weights and balance scales with them and allow these weights to be copied by the other party, creating a uniform weight system between the two.[63] This practice probably started in Mesopotamia and spread from there.

Every time the weights were copied, however, they seem to have deviated slightly from the original. This made the weights a bit too heavy or too light, and each time they were copied they would veer further from the original, like a game of telephone. Using this, archeologists could determine which civilizations had identical weights and thus tell whether there was a direct trade route between the two. The Minoans had four different measurements of identical weights with the Indus River Valley civilization. Some of these weights were recovered on Crete itself and some from Minoan colonies. This shows that the Minoan colonies actively participated in a great deal of trade and that the colonies and Crete itself worked together.

The highest concentration of weights came from the city of Akrotiri on the island of Thera, modern-day Santorini. Thera was a prominent Minoan colony and an important trade hub.[64] The route proposed by the authors of the study would run from a city named Shortugai, in modern-day Afghanistan, through Iran, and up to the city of Trebizond on Anatolia’s Black Sea coast.[65] There, Minoan merchants would be waiting and goods would be exchanged. This is quite different from the previously held view of the scope of Minoan trade. It was previously thought that trade from India to Crete would only have been conducted with Mesopotamian peoples acting as middlemen.[66] Instead, direct trade between India and Crete puts into perspective the scale of Minoan trade influence and connections. Knowing this, other proposed theories, like Crete receiving its tin from Britain, become more probable. No concrete evidence of this has been found, though, and the Minoans’ source of tin is still unidentified.

            To sustain such a vast trade empire, the Minoans needed a capable fleet of ships to transport their goods as well as a naval fleet to protect these goods from pirates. Thucydides credits the Minoans with creating the first ever naval fleet, writing

The earliest ruler known to have possessed a fleet was Minos. He made himself master of the Greek waters and subjugated the Cyclades by expelling the Carians and establishing his sons in control of the new settlements founded in their place; and naturally, for the safer conveyance of his revenues, he did all he could to suppress piracy.[67]

The veracity of this claim is hard to prove, and it should be noted that Thucydides was writing roughly 1,000 years after the Minoans were conquered by the Mycenaeans. Despite this, it does give a good idea of how the Greeks thought of the Minoans even long after they were gone. From the quote some general truths can be garnered: the Minoans controlled the Cyclades and had a strong fleet to suppress piracy, which gives credence to the Minoans being a thalassocracy. Island empires always prioritize constructing a large naval force to protect their home island and overseas colonies; early-20th-century Japan and the British Empire are good examples of this, and in this regard the Minoans were no different. The exact scale of the Minoan navy is the real mystery, one that can only be solved if more archeological evidence comes to light.

Even though archeologists have few examples of Minoan ships outside of paintings, a very small number of confirmed Minoan shipwrecks have been found. The first was discovered by Greek archeologist Elpida Hadjidaki in 2004, on the seafloor off the coast of the island of Pseira.[68] The fact that this was discovered only so recently shows how Minoan history is very much still being written. In 1976 Jacques Cousteau had found Minoan pottery in the shallows of the island, and Pseira was also known as a Bronze Age seaport.[69] Even though this seems like a prime location for a Minoan shipwreck, the deeper waters surrounding the island were never explored until Hadjidaki’s team dove there. On the seafloor 209 ceramic amphoras were discovered, 80 of which were completely or nearly completely intact.[70] The layout in which they were found also provides significant information about the ship’s original dimensions; Hadjidaki estimates it to have been 32 to 50 feet long.[71] This is consistent with iconography from Minoan frescoes of what a smaller Minoan ship should look like. Hadjidaki also suggested that this was most likely a local ship that did not make long-distance journeys to procure overseas goods.[72] It makes sense that the Minoans would have had many classes of ships, some larger for longer expeditions and others smaller for acquiring local goods. Yet the sheer number of amphoras found on one ship gives an idea of how impressive the scale of Minoan trade was. It must be emphasized that this was a small local vessel; the larger ships could possibly have carried thousands of amphoras, most likely filled with olive oil and wine.

Many depictions of Minoan ships can be found on sealstones on Crete, and many of these vessels have only a single mast. Arthur Evans, in his article “The Early Nilotic, Libyan and Egyptian Relations with Minoan Crete,” suggests that a small Minoan ship with a crew of fewer than twelve could have traveled easily to Benghazi in Libya or Alexandria in Egypt.[73] He even claims that, because of the favorable winds and currents and the extensive Cretan forests providing good-quality wood, it is very possible the Minoans were the first people to traverse the open Mediterranean.[74] This would align with the claim made by Thucydides and further explains why the Minoans became a thalassocracy. The extreme deforestation of Crete can be explained by the Minoans using the island’s once-extensive forests to build ships.

            These ships must have stopped at Minoan colonies along their voyages, and the name of the Minoans themselves may help reveal the extent of these colonies in the Mediterranean. Many Bronze Age port cities throughout the Mediterranean bear the name “Minoa.” These cities reach as far west as Sicily and are scattered throughout the Aegean and eastern Mediterranean. The Minoans themselves were only named in the early 20th century, when historians called them after the legendary King Minos of Greek stories. Castleden argues, however, that it is very possible “Minos” was the title of the Minoan king and that the colonies were named after him.[75] The names could then be an etymological remnant of Minoan rule. The coastal location of these cities, their distinctly Minoan street plans, Minoan style of architecture, Minoan burial customs, and pottery shops in the Minoan style all point toward these “Minoas” being Minoan colonies.[76]

To clarify, not all Minoan colonies bore the name Minoa; there is a significant list of other settlements that share all the characteristics of Minoan colonies. Kastri on the island of Kythera is theorized to have been the first Minoan colony, with Minoan settlement dating back to before 2000 B.C.E.[77] Kastri was first excavated in the early 1960s and was identified as a Minoan colony from the heavy presence of Minoan pottery and evidence of Cretan cultural practices. Also notable is the presence of what seems to be pottery belonging to a native population of the island.[78] Dating the pottery and tracking the settlement’s expansion show that this native style was slowly overtaken and eventually completely replaced by Minoan styles as the centuries went by.[79] This probably indicates either the expulsion of the native people of Kythera or their assimilation into Minoan society. The original excavation in the 1960s uncovered only a small portion of the site, while more recent excavations have unearthed a much larger area.

Through these newer excavations, Minoan presence on the island seems to have extended well beyond Kastri.[80] Though the question of whether the native population was pushed out or integrated into Minoan society has not been fully answered, it does allow some insight into Minoan colonial practices. It is clear that the Minoans were not averse to settling in areas where native populations already resided. The Minoans likely colonized Kythera to provide a rest stop for ships and to monopolize trading routes coming through the western Aegean. Another reason for their settlement was surely to extract whatever material resources the island had.[81] It also leaves open the possibility that the Minoans incorporated other cultures into their own, and that at the apex of their expansion their domain contained multiple ethnic peoples.

While Kastri might have been the first Minoan colony, perhaps the most discussed, and the most important for understanding Minoan colonies, is Akrotiri on Thera. Essentially the Minoan equivalent of Pompeii, the city was buried in ash by a volcanic eruption in the 16th century B.C.E.,[82] leaving it relatively well preserved. Three large vessels found at Akrotiri contained wine and olive oil residues.[83] The storeroom in which they were found also featured large windows, and archeologists think it could have been used as a storefront.[84] It makes sense that Akrotiri had such stores, as it would have been a pivotal stop for ships travelling through the eastern Mediterranean and even for ships going to and from the Black Sea. The Minoan civilization’s emphasis on trade is particularly noticeable in its colonies, which tend to sit on the coast along busy trade routes. The Minoans also tended to colonize places where ships could rest on long voyages or wait for favorable winds.

            As at Kastri, Thera was not uninhabited when the Minoans arrived, and likewise local pottery styles became more Minoanized over time.[85] It seems Minoan colonies did not always rely on large numbers of colonists travelling from Crete to settle these faraway cities. Instead, what probably occurred was that artisans were sent from the palace-temples to teach the Cretan ways of producing pottery, making frescoes, and so on.[86] In exchange, some kind of agreement would be reached to bring the cities closer to the Minoans politically. Over time the city became “culturally colonized” without the need for conquest or the resettlement of native peoples.[87] Evidence from Akrotiri, such as Theran cultural and artistic expression persisting in pottery and frescoes alongside strong Minoan influences, gives more credence to the theory that the people of Kastri were assimilated into Minoan society rather than expelled or killed.[88] It seems that as long as a community could provide the Minoans with artisan goods and was located in a coastal area along trade routes, the Minoans were eager to integrate it into their broader trade empire.

            The Minoans established colonies not just on islands; colonies like the one at Miletus in modern-day Turkey show that they settled on the continent as well. Ninety-five percent of the pottery found at Miletus was made in the Minoan style or imported from Crete.[89] On top of that, seven inscriptions in the Minoan Linear A script have been uncovered there. Miletus is not one of a kind: Minoan frescoes and pottery have also been found at Iasos, Turkey, and Qatna, Syria, which could be potential Minoan colonies as well.[90] Iasos is the more likely location of a Minoan colony, as it sits on the Aegean and Minoan colonies seem always to be close to water. These colonies were likely used to produce grain and to mine copper and other metals that the Minoans lacked on Crete. As the Minoans had a mostly export-based economy, they would have tried to cut down on importing food and copper as much as possible.

            Castleden provides his own reasoning for why colonies were established, which again comes back to a lack of resources: they were founded in response to a surge in population on Crete that forced the Minoans to look overseas for new supplies of grain and other food sources.[91] Local populations seem eventually to have integrated with Minoan culture, but how Minoan colonies were initially founded is still a mystery. It is possible that the Minoans were able to establish colonies peacefully. Minoan art and culture were far more advanced than those of their close neighbors, and it can be theorized that gaining access to this Cretan knowledge could have convinced local peoples to allow Minoan settlements on their islands.[92] This possibility is a bit optimistic, however; violence was common in this period, and it would not be out of place for the Minoans to have used it as well. Regardless of how the colonies were formed, Minoan culture spread, and colonies were established with Minoan cities as their reference.

Through this the Minoans only further expanded their trade dominance and their influence on Mediterranean culture.

Minoan influence in the Mediterranean has been greatly downplayed by historians for decades. It is now clear that the Minoans turned to trade because of a lack of natural resources. By concentrating their artisans together and creating specialized government-run workshops, the Minoans were able to turn raw materials into elaborate products that were works of art. These products created a high demand for Minoan goods, which allowed the Minoans to become very wealthy, building large palaces and establishing colonies all over the Mediterranean. They dealt not just in material goods but in cultural goods as well. As with all trade, this was not a one-way exchange: the Minoans took inspiration from the best and oldest cultures of the time, like the Egyptians, while spreading their own culture simultaneously. Where they established colonies they also spread their culture, and it is possible that many different peoples considered themselves Minoan by the time the empire fell. Minoan ships could carry hundreds or even thousands of amphoras long distances, and Crete alone produced tens of thousands of litres of olive oil and wine per harvest. The quality and quantity of Minoan industry were clearly an accomplishment to marvel at. From the way their contemporaries and the later Greeks wrote about them, it becomes clear that the Minoans commanded a level of respect and influence that should grant them more than a footnote in our history books. The Minoans controlled an impressive Bronze Age thalassocracy that spread its products just as far as its culture and left an indelible mark on Mediterranean civilization.

Barrett, Caitlín E. “The Perceived Value of Minoan and Minoanizing Pottery in Egypt.” Journal of Mediterranean Archaeology 22, no. 2 (2010): 211–34. https://doi.org/10.1558/jmea.v22i2.211.

Bonn-Muller, Eti. “First Minoan Shipwreck.” Archaeology 63. Boston: Archaeological Institute of America, 2010.

Broodbank, Cyprian, and Evangelia Kiriatzi. “The First ‘Minoans’ of Kythera Revisited: Technology, Demography, and Landscape in the Prepalatial Aegean.” American Journal of Archaeology 111, no. 2 (2007): 241–74. http://www.jstor.org/stable/40037274.

Buck, Robert J. “The Minoan Thalassocracy Re-Examined.” Historia : Zeitschrift Für Alte Geschichte 11, no. 2 (1962): 129–37. https://www.jstor.org/stable/4434736.

Castleden, Rodney. Minoans: Life in Bronze Age Crete. London: Routledge, 1993.

Cole, Sara. “The Wall Paintings of Tell el-Dab’a: Potential Aegean Connections.” Pursuit – The Journal of Undergraduate Research at The University of Tennessee 1, no. 1, Article 10 (2010). https://trace.tennessee.edu/cgi/viewcontent.cgi?article=1006&context=pursuit

Demakopoulou, K, and S Aulsebrook. “The Gold and Silver Vessels and Other Precious Finds from the Tholos Tomb at Kokla in the Argolid.” Annual of the British School at Athens 113 (2018): 119–42. https://doi.org/10.1017/S0068245418000084.

Evans, Arthur. “The Early Nilotic, Libyan and Egyptian Relations with Minoan Crete.” The Journal of the Royal Anthropological Institute of Great Britain and Ireland 55 (1925): 199–228. https://doi.org/10.2307/2843640.

Graham, J W. “Further Notes on Minoan Palace Architecture: I. West Magazines and Upper Halls at Knossos and Mallia; 2. Access to, and Use of, Minoan Palace Roofs.” American Journal of Archaeology 83 (1979): 49–69.

Keys, David. “Colonizing Cretans.” Archaeology 57. Boston: Archaeological Institute of America, 2004.

Knappett, Carl, and Irene Nikolakopoulou. “Colonialism without Colonies? A Bronze Age Case Study from Akrotiri, Thera.” Hesperia 77, no. 1 (2008): 1–42. https://doi.org/10.2972/hesp.77.1.1.

Marom, Nimrod, Assaf Yasur-Landau, and Eric H Cline. “The Silent Coast: Zooarchaeological Evidence to the Development Trajectory of a Second Millennium Palace at Tel Kabri.” Journal of Anthropological Archaeology 39 (2015): 181–92. https://doi.org/10.1016/j.jaa.2015.04.002.

Reich, John J. “Twelve New Bronze and Iron Age Seals.” The Journal of Hellenic Studies 86 (1966): 159–65. https://doi.org/10.2307/629000.

Revesz, Peter Zsolt and Bipin C Desai. “Data Science Applied to Discover Ancient Minoan-Indus Valley Trade Routes Implied by Common Weight Measures.” In Proceedings of the 26th International Database Engineered Applications Symposium, 150–55. New York, NY, USA: ACM, 2022. https://doi.org/10.1145/3548785.3548804.

Starr, Chester G. “The Myth of the Minoan Thalassocracy.” Historia : Zeitschrift Für Alte Geschichte 3, no. 3 (1955): 282–91. https://www.jstor.org/stable/4434736.

Thera Excavation Storerooms (Greek repository, Akrotiri, contemporary). Three Vessels in the Storage Room of Sector A. at the Akrotiri Excavation Site. 1613. Masonry; construction (discipline); archaeology; experimental archaeology; classical archaeology; urban archaeology; ceramics (objects). Akrotiri Archaeological Site. https://jstor.org/stable/community.31068453

 Three Handled Amphora: Marine Style Octopus. c.1500-1400 B.C.E. https://jstor.org/stable/community.13555696

Tite, M.S, Y Maniatis, D Kavoussanaki, M Panagiotaki, A.J Shortland, and S.F Kirk. “Colour in Minoan Faience.” Journal of Archaeological Science 36, no. 2 (2009): 370–378. https://doi.org/10.1016/j.jas.2008.09.031.

Urbani, Bernardo, and Dionisios Youlatos. “A New Look at the Minoan ‘Blue’ Monkeys.” Antiquity 94, no. 374 (2020): e9. https://doi.org/10.15184/aqy.2020.29.

Vessel (Jug; Ht. 27cm.). ca. 1800 B.C.E. Terra cotta. Heraklion: Mus., Archaeological.; Found at Pazarli. https://jstor.org/stable/community.11656751

Vessel (Pithos; Ht. 45cm.). ca. 1800 B.C.E. Terra cotta. Heraklion: Mus., Archaeological.; From Dascylion, ancient capital of the satrapy of Phrygia during the Achaemenid period. https://jstor.org/stable/community.11656755

Yahalom-Mack, N, D.M Finn, Y Erel, O Tirosh, E Galili, and A Yasur-Landau. “Incised Late Bronze Age Lead Ingots from the Southern Anchorage of Caesarea.” Journal of Archaeological Science, Reports 41 (2022): 1-10. https://doi.org/10.1016/j.jasrep.2021.103321


[1] Rodney Castleden, Minoans: Life in Bronze Age Crete, (Routledge, 1993), 4.

[2] Castleden, Minoans, 3.

[3] Cyprian Broodbank and Evangelia Kiriatzi, “The First ‘Minoans’ of Kythera Revisited: Technology, Demography, and Landscape in the Prepalatial Aegean,” American Journal of Archaeology 111, no. 2 (2007): 241–74, http://www.jstor.org/stable/40037274, 241.

[4] Chester G. Starr, “The Myth of the Minoan Thalassocracy,” Historia : Zeitschrift Für Alte Geschichte 3, no. 3 (1955): 282–91, https://www.jstor.org/stable/4434736, 283.

[5] Starr, “The Myth of the Minoan Thalassocracy,” 284.

[6] Starr, “The Myth of the Minoan Thalassocracy,” 284.

[7] Starr, “The Myth of the Minoan Thalassocracy,” 285.

[8] Robert J. Buck, “The Minoan Thalassocracy Re-Examined,” Historia : Zeitschrift Für Alte Geschichte 11, no. 2 (1962): 129–37, https://www.jstor.org/stable/4434736, 131.

[9] Buck, “The Minoan Thalassocracy Re-Examined,” 131.

[10] Castleden, Minoans, 3.

[11] Castleden, Minoans, 45.

[12] Castleden, Minoans, 46.

[13] Castleden, Minoans, 46.

[14] Castleden, Minoans, 113.

[15] Katarzyna Zeman-Wiśniewska, “Re-Evaluation of Contacts between Cyprus and Crete from the Bronze Age to the Early Iron Age,” Electrum (Uniwersytet Jagielloński. Instytut Historii) 27, no. 27 (2020): 11–32, https://doi.org/10.4467/20800909EL.20.001.12791, 26.

[16] Zeman-Wiśniewska, “Re-Evaluation of Contacts between Cyprus and Crete,” 26.

[17] N. Yahalom-Mack et al, “Incised Late Bronze Age Lead Ingots from the Southern Anchorage of Caesarea,” Journal of Archaeological Science, Reports 41 (2022): 1-10, https://doi.org/10.1016/j.jasrep.2021.103321, 1-2.

[18] Yahalom-Mack et al, “Incised Late Bronze Age Lead Ingots from the Southern Anchorage of Caesarea,” 3.

[19] Castleden, Minoans, 63.

[20] Castleden, Minoans, 40.

[21] Castleden, Minoans, 40.

[22] Castleden, Minoans, 77.

[23] Castleden, Minoans, 78.

[24] Castleden, Minoans, 77.

[25] Castleden, Minoans, 108.

[26] Castleden, Minoans, 109.

[27] Nimrod Marom, Assaf Yasur-Landau, and Eric H Cline, “The Silent Coast: Zooarchaeological Evidence to the Development Trajectory of a Second Millennium Palace at Tel Kabri,” Journal of Anthropological Archaeology 39 (2015): 181–92, https://doi.org/10.1016/j.jaa.2015.04.002, 182.

[28] Marom, Yasur-Landau, and Cline, “The Silent Coast,” 190.

[29] Caitlín E. Barrett, “The Perceived Value of Minoan and Minoanizing Pottery in Egypt,” Journal of Mediterranean Archaeology 22, no. 2 (2010): 211–34, https://doi.org/10.1558/jmea.v22i2.211, 226.

[30] Barrett, “The Perceived Value of Minoan and Minoanizing Pottery in Egypt,” 226.

[31] Castleden, Minoans, 119.

[32] Castleden, Minoans, 12.

[33] Castleden, Minoans, 12.

[34] Bernando Urbani and Dionisios Youlatos, “A New Look at the Minoan ‘Blue’ Monkeys,” Antiquity 94, no. 374 (2020): e9, https://doi.org/10.15184/aqy.2020.29.

[35] Castleden, Minoans, 12.

[36] Castleden, Minoans, 12.

[37] Castleden, Minoans, 119.

[38] Castleden, Minoans, 119.

[39] Sara Cole, “The Wall Paintings of Tell el-Dab’a: Potential Aegean Connections,”  Pursuit – The Journal of Undergraduate Research at The University of Tennessee: Vol. 1 : Iss. 1 , Article 10 (2010): 112, https://trace.tennessee.edu/cgi/viewcontent.cgi?article=1006&context=pursuit.

[40] Cole, “The Wall Paintings of Tell el-Dab’a”, 112.

[41] Cole, “The Wall Paintings of Tell el-Dab’a”, 112.

[42] Cole, “The Wall Paintings of Tell el-Dab’a”, 112.

[43] M.S Tite et al, “Colour in Minoan Faience,” Journal of Archaeological Science 36, no. 2 (2009): 370, https://doi.org/10.1016/j.jas.2008.09.031.

[44] Tite et al, “Colour in Minoan Faience,” 370.

[45] Castleden, Minoans, 95.

[46] Castleden, Minoans, 88.

[47] Castleden, Minoans, 89.

[48] Castleden, Minoans, 90.

[49] J W Graham, “Further Notes on Minoan Palace Architecture: I. West Magazines and Upper Halls at Knossos and Mallia; 2. Access to, and Use of, Minoan Palace Roofs,” American Journal of Archaeology 83 (1979): 49–69, 49.

[50] Graham, “Further Notes on Minoan Palace Architecture,” 49.

[51] Castleden, Minoans, 90.

[52] Castleden, Minoans, 93.

[53] K. Demakopoulou and S. Aulsebrook, “The Gold and Silver Vessels and Other Precious Finds from the Tholos Tomb at Kokla in the Argolid,” Annual of the British School at Athens 113 (2018): 119–42, https://doi.org/10.1017/S0068245418000084.

[54] Demakopoulou and Aulsebrook, “The Gold and Silver Vessels and Other Precious Finds”.

[55] Castleden, Minoans, 95.

[56] John J Reich, “Twelve New Bronze and Iron Age Seals,” The Journal of Hellenic Studies 86 (1966): 159–65, https://doi.org/10.2307/629000.

[57] Reich, “Twelve New Bronze and Iron Age Seals”, 159.

[58] Castleden, Minoans, 118.

[59] Vessel (Pithos; Ht. 45cm.), ca. 1800 B.C.E., Terra cotta, Heraklion: Mus., Archaeological; From Dascylion, ancient capital of the satrapy of Phrygia during the Achaemenid period, https://jstor.org/stable/community.11656755.

[60] Vessel (Pithos; Ht. 45cm.), ca. 1800 B.C.E., Terra cotta, Heraklion: Mus.

[61] Vessel (Jug; Ht. 27cm.). ca. 1800 B.C.E. Terra cotta. Heraklion: Mus., Archaeological.; Found at Pazarli. https://jstor.org/stable/community.11656751.

[62] Three Handled Amphora: Marine Style Octopus, c. 1500–1400 B.C.E., https://jstor.org/stable/community.13555696.

[63] Peter Zsolt Revesz and Bipin C. Desai, “Data Science Applied to Discover Ancient Minoan-Indus Valley Trade Routes Implied by Common Weight Measures,” Proceedings of the 26th International Database Engineered Applications, (New York: 2022), 150, https://doi.org/10.1145/3548785.3548804.

[64] Revesz and Desai, “Data Science Applied to Discover Ancient Minoan-Indus Valley Trade Routes”, 152.

[65] Revesz and Desai, “Data Science Applied to Discover Ancient Minoan-Indus Valley Trade Routes”, 152.

[66] Revesz and Desai, “Data Science Applied to Discover Ancient Minoan-Indus Valley Trade Routes”, 152.

[67] Castleden, Minoans, 116.

[68] Eti Bonn-Muller, “First Minoan Shipwreck,” Archaeology, Vol. 63, Boston: Archaeological Institute of America, 2010.

[69] Bonn-Muller, “First Minoan Shipwreck”.

[70] Bonn-Muller, “First Minoan Shipwreck”.

[71] Bonn-Muller, “First Minoan Shipwreck”.

[72] Bonn-Muller, “First Minoan Shipwreck”.

[73] Arthur Evans, “The Early Nilotic, Libyan and Egyptian Relations with Minoan Crete,” The Journal of the Royal Anthropological Institute of Great Britain and Ireland 55 (1925): 199–228, https://doi.org/10.2307/2843640, 207.

[74] Evans, “The Early Nilotic, Libyan and Egyptian Relations with Minoan Crete,” 208.

[75] Castleden, Minoans, 117.

[76] Castleden, Minoans, 117.

[77] Castleden, Minoans, 117.

[78] Broodbank and Kiriatzi, “The First ‘Minoans’ of Kythera Revisited,” 241.

[79] Broodbank and Kiriatzi, “The First ‘Minoans’ of Kythera Revisited,” 242.

[80] Broodbank and Kiriatzi, “The First ‘Minoans’ of Kythera Revisited,” 259.

[81] Broodbank and Kiriatzi, “The First ‘Minoans’ of Kythera Revisited,” 267.

[82] Thera Excavation Storerooms (Greek repository, Akrotiri, contemporary), Three Vessels in the Storage Room of Sector A. at the Akrotiri Excavation Site, https://jstor.org/stable/community.31068453.

[83] Thera Excavation Storerooms, Three Vessels in the Storage Room of Sector A. at the Akrotiri Excavation Site.

[84] Thera Excavation Storerooms, Three Vessels in the Storage Room of Sector A. at the Akrotiri Excavation Site.

[85] Carl Knappett and Irene Nikolakopoulou, “Colonialism without Colonies? A Bronze Age Case Study from Akrotiri, Thera,” Hesperia 77, no. 1 (2008): 1–42, https://doi.org/10.2972/hesp.77.1.1, 37.

[86] Knappett and Nikolakopoulou, “Colonialism without Colonies?,” 38.

[87] Knappett and Nikolakopoulou, “Colonialism without Colonies?,” 38.

[88] Knappett and Nikolakopoulou, “Colonialism without Colonies?,” 36.

[89] David Keys, “Colonizing Cretans,” Archaeology, Vol. 57, Boston: Archaeological Institute of America, 2004.

[90] David Keys, “Colonizing Cretans”.

[91] Castleden, Minoans, 121.

[92] Castleden, Minoans, 121.