Timeline of Eugenics “Natural” Disasters
I have been working on a major timeline looking at major causes of death via “natural” causes like earthquakes, tsunamis, fires, floods, and dams, which are Man-Made; all natural disasters are actually created by them to harm us.
Speaking of “natural” disasters, watch the Three Gorges Dam. This flooding is on purpose.
Record-breaking rainfall leads to destructive flooding in Beijing.
ALL of this flooding now in China is not good for the already shaky dam situation. KEEP YOUR EYES on the Three Gorges Dam and the Yangtze River. That could take out over 40% of the population.
The Three Gorges Dam project has been controversial at every stage of conception and execution, from concerns about its environmental impact to allegations that it was unnecessarily large and merely a poorly planned vanity project.
Opponents of the project denounced the displacement of huge numbers of local residents (1.9 million) by the lengthy construction project and flooding as a human rights violation, and the project has been plagued with problems since its completion in 2003.
Problems Caused by Three Gorges Dam
The Three Gorges Dam Project, the world’s largest hydropower project, brings many benefits to the Chinese people. However, many problems have also accompanied the project from its proposal to the present day. These Three Gorges Dam problems mainly include:
More Sediment in Upper Riverbeds and Three Gorges Reservoir
The increasing sediment is one of the main Three Gorges Dam disasters. After the dam was constructed and the water level of the upper reaches was raised, the upper river flows more slowly than before; sand and rocks can no longer be flushed downstream in time and pile up in the riverbeds and reservoir. Over time, the upstream riverbeds rise higher and higher, which makes flooding more likely.
Threat to Downstream Riverbanks and Yangtze River Delta
Due to the sedimentation upstream, the water released becomes clearer and erodes the downstream riverbanks more easily. In such a situation, bank collapse becomes more likely, which is a great danger to people and the country, especially during a flood. In addition, reduced sediment makes the land of the Yangtze Estuary shrink, and seawater pushes back toward the coast. In the past, Shanghai, in the Yangtze estuary, extended an average of 40 meters (44 yards) a year into the sea; now the city faces a threat from the ocean instead.
Increased Geological Disasters
As the construction of the Three Gorges Dam forced the surrounding geological environment to change, geological disasters occur more frequently in the reservoir area. Landslides, debris flows, and earthquakes are common occurrences.
In 1992, Premier Li Peng, who had himself trained as an engineer, was finally able to persuade the National People’s Congress to ratify the decision to build the dam, though almost a third of its members abstained or voted against the project. That was an unprecedented sign of resistance from a normally acquiescent body, and it indicates just how bad an idea the dam was considered at the time.
It was widely agreed that several smaller dams could generate the same levels of electricity with far less impact, but the CCP was insistent on showing the world it had the power to ‘tame’ the mighty Yangtze River.
The corruption endemic under the CCP was rife within the project, and it came at the cost of dangerously reduced material quality, lax adherence to minimum standards of structural integrity, and little or no oversight or quality control.
The World Bank refused to advance China funds to help with the project, citing major environmental and other concerns, and President Jiang Zemin did not accompany Premier Li Peng to the official inauguration of the dam in 1994.
The Spectator recalls why a collapse is plausible:
It is plausible because during construction there were nearly a hundred reported instances of corruption, bribery and embezzlement, including 16 cases directly related to construction. The dam’s principal sponsor, former Premier Li Peng, used his position to appoint relatives to senior positions in the construction company. On completion and, with several hundred thousand forcibly relocated inhabitants denied their full resettlement entitlements, Li’s family ended up controlling 15 per cent of China’s power generation industry.
THE SPECTATOR, AUG 1
The Chinese Communist Party’s prestige project was hugely costly: officially it is expected to have cost $22.5 billion USD, but unofficial estimates say it could have cost at least three times that amount.
The hydropower gravity dam began construction in 1994 and was completed in 2003. Spanning 2.33 km across the Yangtze River in central China, the project required 129 nearby cities to be flooded and almost 2 million people to be relocated. Displaced residents saw no more than 10% of the ‘compensation fund’ allocated by the CCP, as most of it was kept by regional directors and party officials.
Over 1,000 archaeological and historic sites were submerged by the dam, and 1,600 drowned mines and factories add to the toxic soup of the reservoir.
The environmental impacts of the Three Gorges Dam have been massive, to say the least. The weight of the water in the reservoir causes earthquakes and has shifted the very bedrock the dam sits on, while the high water levels in the area cause huge landslides and soil erosion into the Yangtze River, with the mud and silt collecting at the base of the dam. It is even estimated that the concentration of weight from the water and concrete has slowed the Earth’s rotation by 0.06 microseconds per day.
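As a rough plausibility check on that last figure, here is a minimal back-of-the-envelope sketch, assuming roughly 40 cubic kilometers of impounded water raised about 175 meters at roughly 30°N (commonly cited values for the reservoir, not figures taken from this article); conservation of angular momentum then links the change in Earth’s moment of inertia to a change in the length of day:

```python
import math

# Back-of-the-envelope check of the "0.06 microseconds per day" claim.
# All reservoir inputs below are assumptions, not figures from the article.
I_EARTH = 8.0e37        # Earth's moment of inertia, kg*m^2
R_EARTH = 6.371e6       # Earth's mean radius, m
DAY = 86_400.0          # length of day, s

water_mass = 40e9 * 1000.0   # ~40 km^3 of water at 1000 kg/m^3 (assumed)
delta_h = 175.0              # approx. height the water is raised, m (assumed)
lat = math.radians(30.8)     # approx. latitude of the reservoir (assumed)

# Raising mass m radially by dh at latitude `lat` moves it ~dh*cos(lat)
# farther from the rotation axis (axis distance r = R*cos(lat)), so
# dI ~= m * 2 * r * dr = 2 * m * R * cos(lat)^2 * dh.
delta_I = 2 * water_mass * R_EARTH * math.cos(lat) ** 2 * delta_h

# Angular momentum I*omega is conserved, so dT/T = dI/I.
delta_day_s = DAY * delta_I / I_EARTH
print(f"Length-of-day change: {delta_day_s * 1e6:.3f} microseconds")
```

Run as written, this prints about 0.07 microseconds, the same order of magnitude as the cited 0.06, so the claim is at least dimensionally sensible.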
Other issues are significant deforestation of the area and poor or incapable dam waste management: the dam is constantly choked with garbage, both on the surface and at its base. The dam is so massive that it has created a micro-climate that threatens the ecosystem of the region.
Not to mention, 265 million gallons of raw sewage are deposited in the Yangtze River each year. The reservoir causes it to collect, resulting in toxic algae growth.
Scrutiny of the dam intensified recently when satellite imagery appeared to show significant changes in the structure. The CCP admitted that there had been some movement in the dam, but claimed it was well within acceptable limits and that the warping seen on Google Earth was an artifact of the way the imagery was ‘stitched’ together.
A new dam has since been inserted over the original satellite imagery on Google Earth. Classic CCP.
On July 8, 2020, amidst the current 100-year rainfall levels, the dam’s state-owned operator addressed concerns about the dam’s structural integrity in a statement, saying that the dam was safe. But since July 5, the company has closed off the dam to visiting tourists and all media, foreign and domestic, making Chinese netizens nervous.
Another factor that made seasoned China watchers nervous is that the state-run Global Times suddenly reported: “Three Gorges Dam ‘not at risk of collapse,’ safe for heavy rainfall: experts.” It’s a reliable rule of thumb that whatever the Global Times reports, it’s safe to assume the reality is the complete opposite.
Fan Xiao, a Chinese geologist and long-standing critic of giant dam projects, said the rumors reflected the lack of debate about the Three Gorges project, which was now considered a “national treasure” that should not be criticized. On June 23rd, Fan told The Epoch Times his superiors had instructed him earlier in the day “not to take any foreign media interviews.”
“If talking about problems is stigmatized, then it is nothing more than putting one’s head in the sand and deceiving oneself,” Fan posted on his WeChat account on Monday. “It will solve no problems and could make them worse.”
“The dam can only temporarily intercept flooding upstream, but it can do nothing about the flooding from heavy rains in the middle and lower reaches of the Yangtze.”
FAN XIAO – THE ASIA TIMES.
Dr Wang Weiluo, a German-Chinese hydraulic engineer, has studied the Three Gorges Dam for decades. In a July 8 analysis published on Yibao, a Chinese website for current affairs commentary (the article requires translation), Wang predicted, based on the current satellite images, that the dam would not last longer than 50 years, meaning it has only roughly 30 more years left.
There are four relatively large earthquake faults in Hubei Province, and three of them are located in proximity to the dam.
“Although these faults are short and small, ranging from dozens of miles to approximately 100 miles, being so close to the Three Gorges Dam makes them a serious threat to the safety of the dam”.
DR WANG WEILUO – THE EPOCH TIMES
Dr Wang said that the Three Gorges Project has not been checked and certified since its commissioning in 2003, as no one dares to guarantee its quality or safety.
Recently, authorities have opened all water discharge gates amid heavy rainfall in China, and several issues with the dam are now coming to light. There were several engineering errors and design problems in the dam itself when it was built: among these, its hydraulic slope was calculated incorrectly, which could bring disaster in itself, and its flood control benefits are now shown to have been greatly exaggerated.
The CCP’s official narratives of the dam’s abilities have been walked back; in 2003, an article headline from state-media Xinhua said the dam could withstand a once-in-a-10,000-year flood. The wording was changed to 1,000 years in 2007, then 100 years in 2008; and in 2010, a TV anchor at state broadcaster CCTV cited the Changjiang Water Resources Commission, which has direct oversight over the Yangtze River Basin, saying that people “cannot place all their hopes on the Three Gorges Dam.”
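For readers unfamiliar with the “once-in-N-years” language, it is shorthand for an annual exceedance probability of 1/N, and the walk-back matters more than it may sound. A quick sketch (standard hydrology arithmetic, my own illustration rather than anything from the article) shows how much each revision raises the odds of the dam meeting its design flood within Dr Wang’s 50-year lifespan estimate:

```python
# A "once-in-T-years" flood means an annual exceedance probability of 1/T.
def prob_at_least_one(T_years: float, horizon: float) -> float:
    """Chance of at least one 1-in-T-years flood within `horizon` years."""
    return 1 - (1 - 1 / T_years) ** horizon

for T in (10_000, 1_000, 100):
    p = prob_at_least_one(T, 50)  # 50-year horizon, per Wang's lifespan estimate
    print(f"1-in-{T}-year flood over 50 years: {p:.1%}")
# Prints roughly 0.5%, 4.9%, and 39.5%: each revision of the official claim
# sharply raises the odds the dam will face its design flood.
```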
In an interview with The Epoch Times, Dr Wang Weiluo pointed out that many of the dam’s goals can’t be achieved simultaneously, as the CCP attempted to do.
Dr Wang noted that power generation conflicts with flood control, shipping conflicts with flood control, sediment storage conflicts with sediment clearing, and power generation also conflicts with sediment drainage, shipping, and irrigation. In short, the CCP tried to make the dam do just about everything, and because of that, its different functions ended up undermining one another, compromising the basic functioning and integrity of the dam.
Dr Wang Weiluo also noted:
It was demonstrated that the advantage of the Three Gorges Dam was that it could effectively control the downstream water volume. The current situation is just the other way around: when the downstream is dry, the dam needs to store water; when the downstream is flooded, the Three Gorges Dam needs to discharge flood waters.
DR WANG WEILUO
Questions around Construction Quality
There are also quality issues with the dam, which are becoming more apparent. The CCP strictly controls information on the quality of the dam, so most of the data has come from the CCP itself, yet this hasn’t been enough to silence researchers.
Among the issues: several sections of the dam show horizontal displacement, and there is cracking in multiple areas, which has been attributed to the dam being constructed and inspected as multiple separate projects rather than as a single whole with a standardized quality. In other words, the dam was built in sections, and each section was treated as a separate project.
Cracks have recently appeared, apparently along the joints between these sections. In addition, holes drilled in 2002 at key parts of its 800-ton gate were never properly filled: instead of drilling out and recasting the holes, authorities only applied a layer of chemical coating intended to prevent leakage. On top of this, cracks began appearing in the high slope of the ship lock as far back as 2002, which could also affect the opening and closing of the ship lock gates.
These are just a few of the details which were explained in depth in a series of articles from The Epoch Times published in Chinese.
The Three Gorges Dam is now under the international spotlight, as authorities have repeatedly opened the floodgates amid the heavy rains and floods, causing observers to ask why.
The CCP faces a choice here: the dam itself does not appear able to withstand the floods, so it had to choose between opening the gates and flooding certain cities downstream, or letting the dam collapse and flooding them all.
In addition, reports say that the CCP didn’t warn many of the downstream towns and cities before it opened these floodgates, and many people’s homes were flooded in the middle of the night.
Locals know the area well, and know there wasn’t sufficient rainfall overnight to cause such a dramatic rise in the water level, so it’s clear the CCP is regularly releasing massive amounts of water from the dam in the middle of the night, under the cover of darkness, to relieve pressure on the structure.
There were also reports of people being killed because they did not know in advance to shut off their electricity, and of people being killed in their sleep by the floods.
In addition, there is already serious flooding downstream from the heavy rains, and in Anhui province local authorities have been forced to destroy dams and dykes to manage the flood waters, sacrificing farming areas to minimize damage to the bigger cities.
Any failure at the dam, on top of the floods already underway, would mean massive disaster.
“Once-in-100-year-rains” – welcome to 2020
China is currently experiencing biblical levels of rain and floods, but because China is a foe of Trump, the Western news media has largely ignored this calamity.
Due to heavy rainfall on July 26th, water flows in the upper reaches of the Yangtze River, including the Min River, the Jialing River, and the Three Gorges area, increased significantly, and the inflow rate at the Three Gorges Reservoir rose rapidly to 50,000 cubic meters per second.
As a result, the bureau of hydrology of the Chongqing water resources commission officially announced Yangtze River Flood No. 3 of 2020, with the maximum inflow into the Three Gorges Reservoir expected to reach around 60,000 cubic meters per second on the evening of July 27th.
In anticipation of the new flood peak, the Three Gorges Dam increased its water discharge beforehand, and on July 24th the outflow from the dam soared to 45,600 cubic meters per second, setting a new record for the year. Netizens estimated that the dam may have opened all nine floodgates to release flood waters before the next deluge.
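To see why operators rush to discharge ahead of a forecast peak, here is a minimal mass-balance sketch using only the flow figures quoted above; the simple storage model is my own illustration, and the flood-control capacity in the final comment is a commonly cited figure rather than one from this article:

```python
# Net reservoir accumulation from the flow figures quoted above.
SECONDS_PER_DAY = 86_400

inflow_peak = 60_000     # m^3/s, forecast peak inflow (evening of July 27)
outflow_record = 45_600  # m^3/s, record outflow (July 24)

net_rate = inflow_peak - outflow_record       # m^3/s left behind in the reservoir
stored_per_day = net_rate * SECONDS_PER_DAY   # m^3 accumulated per day

print(f"Net inflow: {net_rate:,} m^3/s")
print(f"Stored per day at that rate: {stored_per_day / 1e9:.2f} billion m^3")
# ~1.24 billion m^3/day; against a flood-control storage commonly cited at
# about 22 billion m^3, a sustained peak like this eats the buffer in weeks.
```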
The once-in-a-hundred-years rainfall is expected to only get heavier throughout August, and if the dam collapses, it’s expected to do so by the end of August 2020.
Chinese citizens have complained on social media that the most vicious thing was the government secretly releasing the floodwater without informing citizens to evacuate early, because doing so saves the cost of compensating them for their property. By making man-made disasters look like natural ones, the CCP avoids culpability and compensation payments, and people are then expected to thank the Communist Party for any help and aid.
There is concern not only about flooding in big cities but also about the poor in flood-prone areas. Anhui province, located in the middle and lower reaches of the Yangtze River, was sacrificed to keep Nanjing and Wuhan safe, causing farmers to lose entire crops at a time when China’s food security was already threatened by the CCP virus, locusts, and floods.
According to state-media Xinhua News, the State Defense Office, the Ministry of Emergency Management, and the State Administration of Food and Material Reserves transferred woven bags, woven cloth, geotextiles, and other types of flood control supplies to Anhui province to support the flood rescue work, but provided not a penny of relief funds or necessities for the people.
According to official data from China’s Ministry of Emergency Management, from June 1st to July 22nd the floods affected over 45.5 million people across 27 provinces and regions, including Jiangxi, Anhui, Hubei, Hunan, Guangxi, Guangdong, Chongqing, and Sichuan; a reported 142 people were killed or missing, 35,000 houses collapsed, and direct economic losses reached 116.05 billion yuan, or $16.6 billion USD.
As of August 8th, the water level of the Yangtze River continues to rise.
In the eyes of CCP officials the people in the flood affected areas can just be sacrificed at any time – what they really care about is maintaining the stability of the regime.
China has been besieged by floods since June, has fallen out with the US and damaged its foreign relations, and the international community is reevaluating their relationship with the Chinese Communist Party.
Xi is preparing for war and is not concerned with the flooding; he will release as much water as possible to try to stave off the repercussions of a dam collapse.
Just before the No. 3 flood, on July 24th, Zheng Shouren, the Three Gorges Dam’s designer and an academician of the Chinese Academy of Engineering, known in China as ‘The Father of the Three Gorges Dam,’ passed away due to illness. While the dam he left behind remains a controversial subject, Zheng once made a well-known and worrying statement: “You can’t blame the Three Gorges Project whenever you encounter extreme weather.”
Dr Wang Weiluo, the hydraulic expert, wrote that the current public outcry is not just “general abstract blame” but has specific reasons. Before the construction of the Three Gorges Dam, the Yangtze River had a series of lakes, such as Dongting Lake and Poyang Lake, for flood regulation; when there was flooding, the lakes would naturally divert part of the flood and prevent overflow in the mainstream.
After the flood, the water held in the lakes would slowly re-enter the mainstream of the Yangtze River, so that the river maintained a more stable water level. In this way the ecology was balanced naturally, and both navigation and irrigation were guaranteed.
Now that the CCP has built a huge dam, this original natural cycle has been abolished. Take Poyang Lake as an example: every October is the Three Gorges Dam’s water-storing period and also the dry season in Jiangxi province, so Poyang Lake is in urgent need of replenishment; however, in order to generate electricity, the dam cuts the flow instead, allowing only a trickle of water into the Yangtze, which will eventually lead to the serious disaster of the downstream lakes drying up.
Recently a strange phenomenon has been captured in multiple Chinese provinces – fish have been seen jumping out of the water in large numbers. This is thought to be a warning sign of impending natural disaster, such as a massive earthquake.
Not The First Catastrophic Dam Collapse In China
In August 1975, Typhoon Nina battered China for days, dumping more than a year’s worth of rain in 24 hours. By the time night fell on August 8th, as many as 65 dams in the area had collapsed. The Soviet-built Banqiao Dam had been designed to handle a once-in-1,000-years deluge; unfortunately, Typhoon Nina turned out to be a once-in-2,000-years storm, bearing down with enough force to cause the world’s deadliest infrastructure failure ever.
It’s estimated that the sudden Banqiao dam collapse unleashed 600 billion liters of water, killing 85,000 people almost instantly; a study conducted by eight Chinese water science experts, who probably had access to censored government reports, estimated the number of total dead, from the flooding and the resulting epidemics and famine, at 230,000.
The Three Gorges Dam – A Calamity of Chinese Proportions Awaits
China’s Three Gorges Dam is one of the largest ever created. Was it worth it?
The Three Gorges Dam is a hydroelectric gravity dam that spans the Yangtze River by the town of Sandouping, in Yiling District, Yichang, Hubei province, central China, just downstream of the Three Gorges.
The world’s largest power station in terms of installed capacity, the Three Gorges Dam generates an average of 95±20 TWh of electricity per year, depending on the annual amount of precipitation in the river basin. After the extensive monsoon rains of 2020, the dam’s annual production reached nearly 112 TWh, breaking the previous world record of ~103 TWh set by the Itaipu Dam in 2016.
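For scale, those annual figures convert to an average power output with simple arithmetic; the sketch below uses only the numbers in the paragraph above:

```python
# Convert annual generation (TWh/year) to average output (GW).
HOURS_PER_YEAR = 8_760  # 365 days * 24 hours

for label, twh in [("typical year", 95), ("2020 record", 112), ("Itaipu 2016", 103)]:
    avg_gw = twh * 1_000 / HOURS_PER_YEAR  # 1 TWh = 1,000 GWh
    print(f"{label}: {twh} TWh/yr ~ {avg_gw:.1f} GW average output")
# 95 TWh/yr works out to a ~10.8 GW average: running around the clock,
# that is on the order of ten large nuclear reactors.
```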
A Video Simulation of a Collapse is Leaked
On July 23rd, a video of a Three Gorges Dam failure simulation was widely circulated on the internet. According to the video, and through analysis of existing data, if the dam were to collapse, flood waters of up to 100 meters in height would be released at a speed of more than 100 kilometres per hour.
Within 30 minutes of collapse, the floodwaters would destroy the nearby dams directly downstream and reach the city of Yichang, destroying it with a current speed of 70 kilometers per hour; within five hours the water level in Yichang would reach 10 meters.
After that, the flood waters would continue to inundate towns along the route at a speed of 60 kilometers per hour, with flood heights of about 15 to 20 meters. When the flood waters reached the open plains, they would spread out, greatly increasing the area affected. The destruction would be complete all the way to, and including, Shanghai.
It’s estimated the death toll could be as high as 400 million people. This would be roughly 1,700 times higher than the highest recorded death toll from a man-made disaster (the Banqiao Dam collapse, China, 1975). A nuclear attack on China’s five largest cities would result in fewer deaths than a sudden collapse of the Three Gorges Dam, and a breach would spell the CCP’s Chernobyl moment.
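The simulation’s figures can be sanity-checked with simple arithmetic; the sketch below is my own illustration using only the numbers quoted above, not output from the leaked video:

```python
# Simple arithmetic on the figures quoted from the simulation.

# Distance covered in the first 30 minutes at the initial ~100 km/h.
initial_speed_kmh = 100
print(f"First 30 minutes: ~{initial_speed_kmh * 0.5:.0f} km")
# ~50 km, consistent with the wave reaching Yichang, which lies a few
# tens of kilometers downstream of the dam.

# Check the "1,700 times" claim against the Banqiao figure given earlier.
projected_deaths = 400_000_000  # worst-case estimate cited above
banqiao_deaths = 230_000        # 1975 Banqiao total from the earlier section
print(f"Ratio: {projected_deaths / banqiao_deaths:,.0f}x")  # ~1,739x
```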
- The Three Gorges Dam – constructed from the highest grade of Chineseium.
- Further reading: Murdered for Their Rivers: A Roster of Fallen Dam Fighters.
The stories come to us one at a time. A woman’s body found in a trash heap. Two protesters shot during a demonstration. A man who just stepped out for milk, gunned down by masked assailants on a motorcycle.
In 2015, at least 185 environmental defenders were killed, according to Global Witness. Many of them were dam fighters. At International Rivers, we’re aware of over a hundred activists who have paid the ultimate price for defending their rivers – and there are likely many more cases we’re not aware of.
The dates are in order; I am in the process of editing.
British tea company flag looks just like the USA flag – who designs a flag to look exactly like your so-called enemy?
The flags of the Confederate States of America have a history of three successive designs during the American Civil War. The flags were known as the “Stars and Bars”, used from 1861 to 1863; the “Stainless Banner”, used from 1863 to 1865; and the “Blood-Stained Banner”, used in 1865 shortly before the Confederacy‘s dissolution. A rejected national flag design was also used as a battle flag by the Confederate Army and featured in the “Stainless Banner” and “Blood-Stained Banner” designs. Although this design was never a national flag, it is the most commonly recognized symbol of the Confederacy.
Since the end of the Civil War, private and official use of the Confederate flags, particularly the battle flag, has continued amid philosophical, political, cultural, and racial controversy in the United States. These include flags displayed in states; cities, towns and counties; schools, colleges and universities; private organizations and associations; and individuals. The battle flag was also featured in the state flags of Georgia and Mississippi, although it was removed by the former in 2003 and the latter in 2020. After the former was changed in 2001, the city of Trenton, Georgia has used a flag design nearly identical to the previous version with the battle flag.
The first flag resembling the modern stars and stripes was an unofficial flag sometimes called the “Grand Union Flag,” or “the Continental Colors.” It consisted of 13 red-and-white stripes, with the British Jack in the upper left-hand corner. It first appeared on December 3, 1775, when Continental Navy Lieutenant John Paul Jones flew it aboard Captain Esek Hopkins’ flagship Alfred in the Delaware River. It remained the national flag until June 14, 1777. Even at the time of the Declaration of Independence in July 1776, the Continental Congress would not legally adopt flags with “stars, white in a blue field” for another year. The “Grand Union Flag” has historically been referred to as the first national flag of the United States.
The Continental Navy raised the Colors as the ensign of the fledgling nation in the American War for Independence – likely with the expedient of transforming their previous British red ensign by adding white stripes. The name “Grand Union” was first applied to the Continental Colors by George Henry Preble in his 1872 book known as History of the American Flag.
The flag closely resembles the flag of the British East India Company during that era, and Sir Charles Fawcett argued in 1937 that the company flag inspired the design of the U.S. flag. Both flags could have been easily constructed by adding white stripes to a British Red Ensign, one of the three maritime flags used throughout the British Empire at the time. However, an East India Company flag could have from nine to 13 stripes and was not allowed to be flown outside the Indian Ocean. Benjamin Franklin once gave a speech endorsing the adoption of the company’s flag by the United States as their national flag. He said to George Washington, “While the field of your flag must be new in the details of its design, it need not be entirely new in its elements. There is already in use a flag, I refer to the flag of the East India Company.” This was a way of symbolizing American loyalty to the Crown as well as the United States’ aspirations to be self-governing, as was the East India Company. Some colonists also felt that the company could be a powerful ally in the American War of Independence, as they shared similar aims and grievances against the British government’s tax policies. Colonists, therefore, flew the company’s flag to endorse the company.
However, the theory that the Grand Union Flag was a direct descendant of the flag of the East India Company has been criticized as lacking written evidence. On the other hand, the resemblance is obvious, and some of the Founding Fathers of the United States were aware of the East India Company’s activities and of its free administration of India under Company rule.
In ancient Greek religion and mythology, Pan is the god of the wild, shepherds and flocks, rustic music and impromptus, and companion of the nymphs. He has the hindquarters, legs, and horns of a goat, in the same manner as a faun or satyr.
The word “pan” means “all” in Greek, and Pan became a kind of universal god, a representative of pantheism. Pan was also associated with panic because he had a nasty habit of suddenly appearing out of nowhere, shouting, and frightening people away.
Today the word pan makes us think of the pandemic that suddenly appeared, making many people panic.
The term “pandemic,” from the Greek pandēmos, meaning “all people,” traces its origins along with the word “panic” to the Greek nature god, Pan. This horned and hooved creature was born to Hermes and one of the many nymphs that he loved.
Upon seeing the wild newborn, Pan’s mother fled in terror leaving Hermes to care for his half-child, half-goat son. Hermes introduced the boy to his fellow Olympians who found the curious child pleasing. Carl Kerényi explains that the common attributes associated with Pan – “dark, terror-awakening, phallic” – do not cover the range of possibilities within this god; in fact, the dark Pan may have had a twin. Pan himself may have been just one side of a more complex divine male couple.
Pull up a Chair;)
Nobel’s Path to Dynamite and Wealth
One of Nobel’s tutors was the accomplished Russian organic chemist Nikolai Zinin, who first told him about nitroglycerine, the explosive chemical in dynamite.
Though Nobel was interested in poetry and literature, his father wanted him to become an engineer, and in 1850 he sent him to Paris to study chemical engineering. Though he never obtained a degree or attended a university, Nobel worked in the Royal College of Chemistry laboratory of Professor Jules Pélouze.
It was there that Nobel was introduced to Professor Pélouze’s assistant, Italian chemist Ascanio Sobrero, who had invented nitroglycerin in 1847. Though the explosive power of the chemical was much greater than that of gunpowder, it tended to explode unpredictably when subjected to heat or pressure and could not be handled with any degree of safety. As a result, it was rarely used outside the laboratory.
The Nobel brothers (clockwise): Robert, Alfred (9 years old), Ludvig, and baby Emil. St. Petersburg, around 1843. Source: nobelprize.org.
His experiences with Pélouze and Sobrero in Paris inspired Nobel to look for a way to make nitroglycerin a safe and commercially usable explosive.
There is no Nobel Prize for Mathematics because Alfred Nobel, the founder of the Nobel Prizes, did not include it in his will. There are many speculations as to why he did not include mathematics as a category. One of the more credible reasons is that he simply didn’t care much for mathematics, and that it was not considered a practical science from which humanity could benefit, which was a chief purpose for creating the Nobel Foundation.
However, there are other prestigious awards for mathematicians, such as the Fields Medal and the Abel Prize.
Alfred Nobel was a Swedish chemist, engineer, inventor, businessman, and philanthropist. He was born on October 21, 1833, in Stockholm, Sweden, and died on December 10, 1896, in San Remo, Italy. He was the third son of Immanuel Nobel, an inventor and engineer, and Karolina Andriette Nobel.
Nobel is best known for having bequeathed his fortune to establish the Nobel Prize. However, he also made several important contributions to science during his lifetime and held 355 patents. His most famous invention was dynamite, a safer and easier means of harnessing the explosive power of nitroglycerin; it was patented in 1867.
Nobel displayed an early aptitude for science and learning, particularly in chemistry and languages; he became fluent in six languages and filed his first patent at the age of 24. He embarked on many business ventures with his family, most notably owning the company Bofors, an iron and steel producer that he developed into a major manufacturer of cannons and other armaments.
Let’s first mention that the popular myth that Nobel decided not to fund a prize for mathematicians because his wife was cheating on him with a mathematician (often said to be Gösta Mittag-Leffler) is (predictably) not true. In fact, it’s trivially false, since Nobel was never married! Furthermore, in the correspondence between him and his lover, there is no sign of any such affair.
It appears that the reason there is no Nobel Prize in Mathematics is much more dull: Nobel probably simply wasn’t that interested in pure mathematics. The other categories are more natural, considering the context: Nobel was personally very much involved in physics and chemistry, so prizes for that were beyond questioning.
It was also clear to him that medicine enormously benefits mankind (this is almost the definition of medicine, when you think about it!). It appears that the peace prize was suggested by his secretary and old lover, who would go on to win that prize in 1905. Finally, the prize in literature seems simply to have come from the fact that Nobel was very interested in literature.
There is an alternative theory, however. At the time, the same Mittag-Leffler who is often accused of stealing Nobel’s wife had just recently persuaded King Oscar II to create an endowment prize for various mathematicians throughout Europe. Perhaps this convinced Nobel that no additional prize for mathematicians was needed.
Alfred Nobel made his fortune primarily through the invention of dynamite. Upon his death in 1896, he left the bulk of his assets to an endowment to invest in “safe securities”. His will stated that the interest from this endowment should be awarded annually as prizes to those who “conferred the greatest benefit to humankind”. This is how the Nobel Prizes were established.
Nitroglycerin was later adopted as a commercially useful explosive by Alfred Nobel, who experimented with safer ways to handle the dangerous compound after his younger brother, Emil Oskar Nobel, and several factory workers were killed in an explosion at the Nobels’ armaments factory in Heleneborg, Sweden, in 1864.
Alfred Nobel‘s patent application from 1864
This business exported a liquid combination of nitroglycerin and gunpowder called “Blasting Oil,” but this was extremely unstable and difficult to handle, as evidenced by numerous catastrophes. The buildings of the Krümmel factory were destroyed twice.
In April 1866, several crates of nitroglycerin were shipped to California, three of which were destined for the Central Pacific Railroad, which planned to experiment with it as a blasting explosive to expedite the construction of the 1,659-foot-long (506 m) Summit Tunnel through the Sierra Nevada Mountains.
This led to a complete ban on the transportation of liquid nitroglycerin in California. The on-site manufacture of nitroglycerin was thus required for the remaining hard-rock drilling and blasting needed to complete the First Transcontinental Railroad in North America.
When the medicine prize honored the man who found a use for DDT, which was later banned
The 1948 medicine prize to Swiss scientist Paul Müller honored a discovery that ended up doing both good and bad.
Müller didn’t invent dichlorodiphenyltrichloroethane, or DDT, but he discovered that it was a powerful pesticide that could kill lots of flies, mosquitoes, and beetles in a short time.
The compound proved very effective in protecting agricultural crops and fighting insect-borne diseases like typhus and malaria. DDT saved hundreds of thousands of lives and helped eradicate malaria from southern Europe.
But in the 1960s environmentalists found that DDT was poisoning wildlife and the environment. The US banned DDT in 1972 and in 2001 it was banned by an international treaty, though exemptions are allowed for some countries fighting malaria.
When the man who invented lobotomy won the medicine prize
Carving up people’s brains may have seemed like a good idea at the time. But in hindsight, rewarding Portuguese scientist Antonio Egas Moniz in 1949 for inventing lobotomy to treat mental illness wasn’t the Nobel Prizes’ finest hour.
The method became very popular in the 1940s, and at the award ceremony it was praised as “one of the most important discoveries ever made in psychiatric therapy.”
But it had serious side effects: Some patients died and others were left severely brain damaged. Even operations that were considered successful left patients unresponsive and emotionally numb.
The method declined quickly in the 1950s as drugs to treat mental illness became widespread, and it is very seldom used today.
For more than 100 years, the Nobel Prizes have recognized the finest in human achievements, from literature and science to the Nobel Peace Prize, which is given “to the person who shall have done the most or the best work for fraternity between nations, the abolition or reduction of standing armies and for the holding and promotion of peace congresses,” according to the last will and testament of founder Alfred Nobel.
But the origins of the Nobel Prizes, and the life of Alfred Nobel, tell a very different story, one tainted by the deaths of untold thousands of people.
Alfred Bernhard Nobel was born in 1833 in Stockholm, Sweden. His father, Immanuel Nobel, was an inventor and engineer who struggled financially for much of his life.
Forced to declare bankruptcy, Immanuel left Sweden and began working in St. Petersburg, Russia, where he impressed the czar with one of his inventions: submerged explosive mines that could thwart a naval invasion.
Finally achieving a measure of success, Immanuel brought his wife and eight children to St. Petersburg. His sons were given a formal education, and Alfred shined under strict Russian tutelage, mastering several languages as well as chemistry, physics, poetry and natural sciences.
Because the elder Nobel disapproved of Alfred’s interest in poetry, he sent his son abroad to further his training in chemistry and engineering.
While studying in Paris, Nobel met Italian chemist Ascanio Sobrero, who in 1847 invented nitroglycerin, the oily, liquid explosive made by combining glycerin with nitric acid and sulfuric acid.
Innovation from tragedy
Though nitroglycerine was considered too unsafe to have any practical use, the Nobel family — which now had several profitable enterprises in Russia and Sweden — continued to investigate its potential for commercial and industrial uses.
But their inquiries had tragic results: In 1864, Alfred’s younger brother Emil and several other people were killed in an explosion at one of their factories in Sweden.
The disaster encouraged Alfred to try to find a way to make nitroglycerin safe. Success didn’t come easily: Early experiments included the creation of “blasting oil,” a mixture of nitro and gunpowder, which resulted in several deadly explosions and once killed 15 people when it exploded in a storeroom in San Francisco.
Finally, in 1867, Alfred Nobel found that by mixing nitroglycerin with diatomaceous earth (known as kieselguhr in German), the resulting compound was a stable paste that could be shaped into short sticks that mining companies might use to blast through rock.
Nobel patented this invention as “dynamite,” from the Greek word dunamis, or “power.”
The invention of dynamite revolutionized the mining, construction and demolition industries. Railroad companies could now safely blast through mountains, opening up vast stretches of the Earth’s surface to exploration and commerce.
As a result, Nobel — who eventually garnered 355 patents on his many inventions — grew fantastically wealthy.
Bertha von Suttner, Alfred Nobel’s good friend. 1905 Nobel Peace Prize Recipient. Her Peace Activities undoubtedly influenced Alfred Nobel to establish a prize for peace along with the prizes for science and literature. “Inform me, convince me, and then I will do something great for the movement.” Alfred Nobel said to Bertha von Suttner. Image source: nobelprize.org.
‘Merchant of death’
Dynamite, of course, had other uses, and it wasn’t long before military authorities began using it in warfare, including dynamite cannons used during the Spanish-American War. Though he’s widely credited with being a pacifist, it’s not known whether Nobel approved of dynamite’s military use or not. Nonetheless, he found out what others thought of his invention when, in 1888, his brother Ludvig died. Through some journalistic error, Alfred’s obituary was widely printed instead, and he was scorned for being the man who made millions through the deaths of others.
One French newspaper wrote “Le marchand de la mort est mort,” or “the merchant of death is dead.” The obituary went on to describe Nobel as a man “who became rich by finding ways to kill more people faster than ever before.”
Nobel was reportedly stunned by what he read, and as a result became determined to do something to improve his legacy.
One year before he died in 1896, Nobel signed his last will and testament, which set aside the majority of his vast estate to establish the five Nobel Prizes, including one awarded for the pursuit of peace.
The Black Death: A Timeline of the Gruesome Pandemic
One of the worst plagues in history arrived at Europe’s shores in 1347. Five years later, some 25 to 50 million people were dead.
Nearly 700 years after the Black Death swept through Europe, it still haunts the world as the worst-case scenario for an epidemic. Called the Great Mortality while it wrought its devastation, this second great pandemic of bubonic plague became known as the Black Death in the late 17th century.
Modern genetic analysis suggests that the bubonic plague was caused by the bacterium Yersinia pestis, or Y. pestis. Chief among its symptoms are painfully swollen lymph glands that form pus-filled boils called buboes. Sufferers also face fever, chills, headaches, shortness of breath, hemorrhaging, bloody sputum, vomiting, and delirium; untreated, the disease has a survival rate of about 50 percent.
During the Black Death, three different forms of the plague manifested across Europe. Below is a timeline of its gruesome assault on humanity.
Black Death Emerges, Spreads via the Black Sea
FRESCO BY AN ANONYMOUS PAINTER DEPICTING ‘THE TRIUMPH OF DEATH.’ DEATH AS A SKELETON RIDES A SKELETAL HORSE AND PICKS OFF HIS VICTIMS.
The strain of Y. pestis emerges in Mongolia, according to John Kelly’s account in The Great Mortality. It is possibly passed to humans by a tarabagan, a type of marmot. The deadliest outbreak is in the Mongol capital of Sarai, and the Mongols carry the disease west to the Black Sea area.
Mongol King Janiberg and his army are in the nearby city of Tana when a brawl erupts between Italian merchants and a group of Muslims. Following the death of one of the Muslims, the Italians flee by sea to the Genoese outpost of Caffa, and Janiberg follows on land. Upon arrival at Caffa, Janiberg’s army lays siege for a year, but it is stricken with an outbreak. As the army catapults the infected bodies of its dead over the city walls, the besieged Genoese become infected as well.
Both sides in the siege are decimated and survivors in Caffa escape by sea, leaving behind streets covered with corpses being fed on by feral animals. One ship arrives in Constantinople, which, once infected, loses as much as 90 percent of its population.
Another Caffan ship docks in Sicily, the crew barely alive. Here the plague kills half the population and moves to Messina. Fleeing residents then spread it to mainland Italy, where one-third of the population is dead by the following summer.
The plague arrives in France, brought by another of the Caffa ships docking in Marseille. It spreads quickly through the country.
A New Strain Enters Europe
THE PLAGUE IN TOURNAI, 1349.
A different plague strain enters Europe through Genoa, brought by another Caffan ship that docks there. The Genoese attack the ship and drive it away, but they are still infected. Italy now faces this second strain while already battling the first.
Y. pestis also heads east from Sicily into the Persian Empire and through Greece, Bulgaria, Romania, and Poland, and south to Egypt, as well as Cyprus, which is also hit with destruction from an earthquake and a deadly tidal wave at the same time.
Venice faces its own outbreak but pioneers the first organized response, with committees ordering ship inspections and the burning of vessels carrying contagion, shutting down taverns, and restricting wine from unknown sources. The canals fill with gondolas carrying men who shout official instructions for disposing of dead bodies. Despite those efforts, the plague kills 60 percent of the Venetian population.
The plague awakens an anti-Semitic rage around Europe, causing repeated massacres of Jewish communities, with the first one taking place in Provence, where 40 Jews were murdered.
The plague enters England through the port of Melcombe Regis, in Dorset. As it spreads through the town, some escape by fleeing inland, inadvertently spreading it further.
Though it had been around for ages, leprosy grew into a pandemic in Europe in the Middle Ages. A slow-developing bacterial disease that causes sores and deformities, leprosy was believed to be a punishment from God that ran in families.
The Black Death haunts the world as the worst-case scenario for the speed of a disease’s spread: the second pandemic caused by the bubonic plague, it ravaged Earth’s population.
In another devastating appearance, the bubonic plague led to the deaths of 20 percent of London’s population. The worst of the outbreak tapered off in the fall of 1666, around the same time as another destructive event: the Great Fire of London.
The first of seven cholera pandemics over the next 150 years, this wave of the small intestine infection originated in Russia, where one million people died. Spreading through feces-infected water and food, the bacterium was passed along to British soldiers, who brought it to India, where millions more died.
The first significant flu pandemic started in Siberia and Kazakhstan, traveled to Moscow, and made its way into Finland and then Poland, where it moved into the rest of Europe. By the end of 1890, 360,000 had died.
The avian-borne flu that resulted in 50 million deaths worldwide, the 1918 flu was first observed in Europe, the United States, and parts of Asia before spreading around the world. At the time, there were no effective drugs or vaccines to treat this killer flu strain.
Starting in Hong Kong and spreading throughout China and then into the United States, the Asian flu became widespread in England where, over six months, 14,000 people died. A second wave followed in early 1958, causing about 1.1 million deaths globally, with 116,000 deaths in the United States alone.
First identified in 1981, AIDS destroys a person’s immune system, resulting in eventual death from diseases that the body would usually fight off. AIDS was first observed in American gay communities but is believed to have developed from a chimpanzee virus in West Africa in the 1920s. Treatments have been developed to slow the progress of the disease, but 35 million people have died of AIDS since its discovery.
First identified in 2003, Severe Acute Respiratory Syndrome (SARS) is believed to have started with bats, spread to cats, and then to humans in China; it went on to reach 26 other countries, infecting 8,096 people and causing 774 deaths.
A group of religious zealots known as the Flagellants first begins to appear in Germany. These groups of anywhere from 50 to 500 hooded and half-naked men march, sing, and thrash themselves with lashes until swollen and bloody. Originally the practice of 11th-century Italian monks during an epidemic, the movement spreads through Europe. Also known for their violent anti-Semitism, the Flagellants mysteriously disappear by 1350.
The plague hits Marseille, Paris, and Normandy, and then the strain splits, with one arm moving on to the now-Belgian city of Tournai to the east and the other passing through Calais and Avignon, where 50 percent of the population dies.
The plague also moves through Austria and Switzerland, where a fury of anti-Semitic massacres follows it along the Rhine after a rumor spreads that Jews had caused the plague by poisoning wells, as Jennifer Wright details in her book Get Well Soon: History’s Worst Plagues and the Heroes Who Fought Them. In towns throughout Germany and France, Jewish communities are completely annihilated. In response, King Casimir III of Poland offers a safe haven to the persecuted Jews, starting a mass migration to Poland and Lithuania. Marseille is also considered a safe haven for Jews.
Black Death Reaches London, Scotland and Beyond
FLAGELLANTS, KNOWN AS THE BROTHERS OF THE CROSS, SCOURGING THEMSELVES AS THEY WALK THROUGH THE STREETS IN ORDER TO FREE THE WORLD FROM THE BLACK DEATH, IN THE BELGIUM TOWN OF TOURNAI
Following the infection and death of King Edward III’s daughter Princess Joan, the plague reaches London, according to King Death: The Black Death and its Aftermath in Late-Medieval England by Colin Platt. As the devastation grows, Londoners flee to the countryside to find food. Edward blames the plague on garbage and human excrement piled up in London streets and in the Thames River.
One of the worst massacres of Jews during the Black Death takes place on Valentine’s Day in Strasbourg, with 2,000 Jewish people burned alive. In the spring, 3,000 Jews defend themselves in Mainz against Christians but are overcome and slaughtered.
The plague hits Wales, brought by people fleeing from southern England, and eventually kills 100,000 people there.
Vikings, Crippled by Plague, Halt Exploration
An English ship brings the Black Death to Norway when it runs aground in Bergen. The ship’s crew is dead by the end of the week, and the pestilence travels to Denmark and Sweden, where the king believes fasting on Fridays and forgoing shoes on Sundays will please God and end the plague. It doesn’t work: the plague kills two of the king’s brothers and moves into Russia and eastern Greenland.
Scotland, having so far avoided the plague, hopes to take advantage of English weakness by amassing an army and planning an invasion. While waiting on the border to begin the attack, the troops become infected, with 5,000 dying. Choosing to retreat, the soldiers bring the disease back to their families, and a third of Scotland perishes.
Black Death Fades, Leaving Half of Europe Dead
The plague’s spread begins to peter out significantly, possibly thanks to quarantine efforts, after causing the deaths of between 25 and 50 million people and leading to the massacres of 210 Jewish communities. In total, Europe has lost about 50 percent of its population.
With the Black Death considered safely behind them, the people of Europe face a changed society. The combination of the massive death rate and the numbers of survivors fleeing their homes sends entrenched social and economic systems spiraling. It becomes easier to get work for better wages and the average standard of living rises.
With the feudal system dying, the aristocracy tries to pass laws preventing any further rise by the peasants, leading to upheaval and revolution in England and France. Significant losses within the older intellectual communities open an unprecedented opportunity for new ideas and art to take hold, directly leading to the Renaissance and a more youthful, enlightened period of human history.
The Bubonic Plague never completely exits, resurfacing several times through the centuries.
THE GREAT PLAGUE OF LONDON, 1665
The Great Plague of London in 1665 was the last in a long series of plague epidemics that first began in London in June 1499. The Great Plague killed between 75,000 and 100,000 of London’s rapidly expanding population of about 460,000.
The Great Plague of London, lasting from 1665 to 1666, was the last major epidemic of the bubonic plague to occur in England.
First suspected in late 1664, London’s plague began to spread in earnest eastwards in April 1665 from the destitute suburb of St. Giles through rat-infested alleys to the crowded and squalid parishes of Whitechapel and Stepney on its way to the walled City of London.
THE GREAT PLAGUE AT ITS PEAK
By September 1665, the death rate had reached 8,000 per week. Helpless municipal authorities threw their earlier caution to the wind and abandoned quarantine measures. Houses containing the dead and dying were no longer locked. London’s mournful silence was broken by the noise of carts carrying the dead for burial in parish churches or communal plague pits such as Finsbury Field in Cripplegate and the open fields in Southwark.
Well-off residents soon fled to the countryside, leaving the poor behind in impoverished and decrepit parishes. Tens of thousands of dogs and cats were killed to eliminate a feared source of contagion, and mounds of rotting garbage were burned. Purveyors of innumerable remedies proliferated, and physicians and surgeons lanced buboes and bled black spots in attempts to cure plague victims by releasing bad bodily humors.
Plague Orders, first issued by the Privy Council in 1578, were still effective in 1665. These edicts prohibited churches from keeping dead bodies on their premises during public assemblies or services, and carriers of the dead had to identify themselves and could not mix with the public.
Between 1665 and 1666, London saw one of its worst outbreaks of the plague since 1348. The government eventually introduced public health measures that restricted people’s movement in a way that would not be seen in London again until the COVID-19 outbreak in 2020. Now, just a couple of years after the outbreak of COVID, and while much of the world is still feeling its lasting effects, it is especially interesting to consider how people in the past dealt with very similar public health crises.
Death Travels up the River: The Great Plague Arrives in London
Two women lying dead in a London street during the great plague, 1665, one with a child who is still alive. Etching after R. Pollard II, via the Wellcome Collection, London
London was well acquainted with the plague by the seventeenth century, as there had been several outbreaks in the city since 1348. Despite this, the Great Plague of London in 1665-1666 was perhaps the worst. Estimates report that about 15% of London’s population was lost, and while official records report 68,596 deaths, it is more accurate to assume that the number was probably over 100,000.
The disease first arrived in 1665 in St-Giles-in-the-Fields, a parish just outside the city walls. It then spread into the heart of the city, and by September it was reported that 7,165 people in London had died in just one week. Transported through the overcrowded city by rats and their fleas carrying the bacterium Yersinia pestis, the disease appeared not to discriminate among the victims it chose.
In the end, the only thing that stopped the plague in 1666 was the Great Fire of London, which ripped through the city and destroyed a lot of the infrastructure as well as the infected rats and fleas. Prior to this, the local government had attempted to put in place some public health measures in order to prevent the spread of the disease.
A street during the plague in London with a death cart and mourners, color wood engraving by E. Evans, via the Wellcome Collection, London
One of the two main ways the government attempted to control the spread of the disease was by “shutting up” houses. This early concept of quarantine was born in 14th century Venice, where ships were held at ports for 40 days after their arrival to ensure that they did not bring disease into the city. The word quarantine comes from the Italian quaranta giorni, which translates to “forty days.”
The concept has evolved over time, and it is now defined in the Cambridge dictionary as “a specific period of time in which a person or animal that has a disease, or may have one, must stay or be kept away from others in order to prevent the spread of the disease.”
The concept was loosely employed in seventeenth-century England. In 1630, the Privy Council in London ordered that any houses infected with plague be “shut up.” The process began when someone passed away. Government-appointed “searchers” would be sent out in order to ascertain how the individual had died. If it were understood to be the plague, the house would be “shut up.”
Understandably, the thought of being locked in their own homes and left either to die of the plague or to catch it off a fellow family member did not appeal to most. It was therefore common for individuals who were aware they had the plague before the searchers were sent to disguise their malady. Those who were wealthy enough sometimes even resorted to bribery to avoid being locked in their house to eventually die. Because these searchers were often older, poorer women, they were extremely likely to take these bribes.
Two men discovering a dead woman in the street during the great plague of London, wood engraving by J. Jellicoe after H. Railton, via the Wellcome Collection, London
In order to ensure their rules were adhered to, guards were placed outside the doors of such houses to ensure that no one left. The local constable padlocked the doors of the homes, which were then marked with a red cross and the words “Lord have mercy upon us” written alongside. This was done to prevent people from entering the home and to warn others that those inside were infected.
The law stated that this quarantine should last 20 days; however, this period was extended if one of the individuals inside passed away. During this period, the houses marked with these crosses were looked upon with immense fear. There were few offers of help from the outside, and Samuel Pepys, a London resident at the time, reports that “…a gentleman walking by called to us to tell us that the house was shut up of the sickness. So we with great affright turned back, being holden to the gentlemen; and went away.”
There were also reports of escapes. Naturally, healthy people did not like the idea of being locked within an infected house for 20 days, where they would more than likely catch the disease themselves. Eventually, these orders developed into sending infected people to Pest Houses.
More Extreme Measures: The Pest Houses
Pest house (isolation hospital in times of plague), Tothill Fields, Westminster, London, c. 1840, via the Wellcome Collection, London
Alongside at-home quarantine, the Privy Council employed another method to control the plague’s spread: Pest Houses. The Earl of Craven stated that shutting up families within their homes was inhumane and ineffective. He argued for the use of Pest Houses, which were effectively isolation hospitals where sick people, or those who had been in contact with the disease, could be taken until they recovered.
If the searchers sent out to look for people who had the disease but had not identified themselves discovered someone with the plague, they could send the suffering individual to the local Pest House rather than back to their own house to isolate.
It was up to the families whether to move with an infected relative to the Pest House or to stay at home and quarantine. If the entire family went to the Pest House, the infected home would still be quarantined: the door would be marked with a red cross, but with no inscription, to show that the house was empty. Again, guards would be stationed outside to ensure that no one entered or looted it.
Records that survive from the period and depict the construction of Pest Houses show that they were made up of two buildings: one for the infected and one for the healthy but exposed. Both were designed in the same fashion: tall stone walls and big windows. The big windows were there to ensure airflow and to release any miasmas (bad smells) from the buildings, as miasmas were believed to be what caused disease.
These establishments were ruled over by a master or mistress, who, in turn, employed nurses and watchmen. The gates around the property were locked to prevent people from escaping.
A physician wearing a 17th-century plague preventive, via the Wellcome Collection, London
Masks were also employed during the outbreak of the plague, but not in the way we might assume: ordinary individuals did not wear masks; doctors did. The plague mask has become a distinct visual of early modern medicine, but why was it worn?
Christian J. Mussap has credited the introduction of this mask, and the entire outfit, to the French doctor Charles de Lorme. De Lorme described the beaked mask as:
“… half a foot long, shaped like a beak, filled with perfume with only two holes, one on each side near the nostrils, but that can suffice to breathe and carry along with the air one breathes the impression of the [herbs] enclosed further along in the beak.”
Doctors donned this herb-filled mask because of the belief in miasmas, or bad smells. The dominant medical theory at the time held that disease was spread through miasmas; thus, by filling their masks with pleasant-smelling herbs, doctors believed the disease could not be passed on to them while they worked with patients.
The most common substance used in the mask was theriac, a mixture of over 55 herbs and other substances such as honey or cinnamon. Unfortunately for those who wore these masks, the herbs were ineffective against the plague, which was caused by bacteria rather than bad air.
According to Garrett Ray's Encyclopedia of Infectious Diseases, the first mention of the iconic plague doctor is found during the 1619 plague outbreak in Paris, in the written work of royal physician Charles de Lorme, then serving King Louis XIII of France.
The Great Plague of 1665 was the last major plague in England. Before the Great Plague, England had had outbreaks of plague (meaning many people got the disease) every few decades. For example:
- The 1603 plague killed 30,000 Londoners;
- The 1625 plague killed about 35,000 people; and
- The 1636 plague killed about 10,000 people.
Another Disaster Brings the End
Detail of London Scenes of the Plague, 1665-1666, via National Archives UK
Fortunately (or unfortunately) for the inhabitants of seventeenth-century London, another of the city's worst disasters took place in 1666. The Great Fire of London ripped through a large section of the city, killing off much of the infection. London's buildings were made of timber with thatched roofs and built extremely close together, meaning they caught fire at an alarming rate.
The situation was only made worse by the fact that London had no organized fire brigade at this time. Attempts were made to control the fire; however, little could be done to prevent the blaze from spreading.
Arguments have been made for and against the idea that the fire halted the spread of the plague. Some, like Meriel Jeater, have argued that the plague was actually declining prior to the outbreak of the fire. Jeater contends that the fire couldn’t possibly have ended the plague because the fire only spread through about a quarter of London. Furthermore, the areas most affected by the plague, Southwark, Clerkenwell, and Whitechapel, were not touched by the fire.
Survey of the ruins caused by the Great Fire of London, 1667, via the British Library, London
There are various ways the plague could have declined on its own. For example, because a rat epidemic preceded each human epidemic, the disease could have reached a point where there were simply no rats left to act as reservoirs. This, combined with the fact that much of the human population had either died or fled, and that the colder months would have made it harder for fleas to survive, means there is a possibility the disease struggled to keep infecting people at the rate it once had.
Whatever caused the decline of the plague after 1666, there is no doubt that while it raged through London at full force, it was a great source of fear and unease to many. The disease was associated not only with death and suffering but also with being separated from one's family members.
As one of the most famous fires in history, the Great Fire of 1666 swept through the capital, leaving a trail of devastation and desperate Londoners behind it.
The Great Fire Of London 1666
On September 2nd, 1666, a tiny spark in a bakery oven accidentally ignited the worst fire that London has ever seen. The Great Fire of London was a disaster waiting to happen: some of the poorer houses even had walls covered with highly flammable tar. Indeed, when the Lord Mayor of London, Sir Thomas Bloodworth, was woken up to be told about the fire, he reportedly replied "Pish!"
The fire gutted the medieval City of London inside the old Roman city wall. It threatened, but did not reach, the City of Westminster (today's West End), Charles II's Palace of Whitehall, and most of the suburban slums.
The Great Fire of London of 1666 was the third occasion on which St Paul's Cathedral was seriously damaged by fire in its 600-year history.
Great Fire of London, (September 2–5, 1666), the worst fire in London’s history. It destroyed a large part of the City of London, including most of the civic buildings, old St. Paul’s Cathedral, 87 parish churches, and about 13,000 houses.
On Sunday, September 2, 1666, the fire began accidentally in the house of the king’s baker in Pudding Lane near London Bridge. A violent east wind encouraged the flames, which raged during the whole of Monday and part of Tuesday. On Wednesday the fire slackened; on Thursday it was extinguished, but on the evening of that day the flames again burst forth at The Temple. Some houses were at once blown up by gunpowder, and thus the fire was finally mastered. Many interesting details of the fire are given in Samuel Pepys’s Diary. The river swarmed with vessels filled with persons carrying away as many of their goods as they were able to save. Some fled to the hills of Hampstead and Highgate, but Moorfields was the chief refuge of the houseless Londoners.
Within a few days of the fire, three different plans were presented to the king for the rebuilding of the city, by Christopher Wren, John Evelyn, and Robert Hooke; but none of these plans to regularize the streets was adopted, and in consequence the old lines were in almost every case retained. Nevertheless, Wren’s great work was the erection of St. Paul’s Cathedral and the many churches ranged around it as satellites. Hooke’s task was the humbler one of arranging as city surveyor for the building of the houses.
The history of the United Kingdom began in the early eighteenth century with the Treaty of Union and Acts of Union. The core of the United Kingdom as a unified state came into being in 1707 with the political union of the kingdoms of England and Scotland into a new unitary state called Great Britain.
1559: Queen Elizabeth I crowned. She was a Protestant queen who ruled for 44 years. It was a time of great wealth for the country, although many thousands were made homeless because of changes in land use.
1588: The Armada. A group of ships from Spain tried to invade England. They were defeated.
1592: Scottish parliament becomes Presbyterian. This is a type of Protestant Christianity, influenced by the teachings of John Calvin.
1603: The start of the Stuart dynasty. King James VI of Scotland was a close relation of the English Queen Elizabeth I. He was crowned as James I of England after her death because she had no children. It brought the two nations together (uneasily).
1642: The Civil War started. King Charles I was not a good leader and wanted money for a war with Scotland. Parliament did not want to help him. People who supported the king (Cavaliers) fought people who supported Parliament (Roundheads). About 10% of the population died in the fighting.
1649: Britain became a republic (called ‘the Commonwealth’). King Charles I had his head cut off. A military leader called Oliver Cromwell took control. He became a dictator.
1660: The Restoration of the Monarchy. Cromwell died in 1658 and his son Richard took over. He was not a good leader. Charles I’s son was invited back to the country to be King Charles II.
1665: The Great Plague of London. About 20% of London’s population died of bubonic plague.
1666: The Great Fire of London. A fire that started in a bakery destroyed 80% of the city.
1689: The Glorious Revolution. King James II (King Charles II’s brother) was unpopular – and Catholic. He fled abroad after William of Orange (the husband of his Protestant daughter Mary) came with an army. Mary and William became joint monarchs, known as William III and Mary.
1692: The Glencoe Massacre. Catholics in Scotland were told to swear their support of the new king William III (a Protestant) by January 1, 1692. The chief of the MacDonald clan swore the oath too late. As punishment, 34 men, 2 women and 2 children were killed by soldiers of the Earl of Argyll on the orders of the king.
1707: Great Britain is created. The Treaty of Union between Scotland and England created the United Kingdom of Great Britain, with a British parliament in Westminster.
1714: The start of the Georgian era. Queen Anne died and her nearest Protestant relative became the new king, George I. He was from Germany. This was the start of a time of great wealth and colonial expansion.
1715: First Jacobite Rebellion. Catholics who wanted James II of England back on the throne (called Jacobites) fought Protestants who supported the new king George I. The fighting ended when the grandson of James II (known as ‘Bonnie Prince Charlie’) lost the Battle of Culloden in 1746.
1720: South Sea Bubble. Thousands of people went bankrupt and many took their own lives when the price of shares in the South Sea Company collapsed.
1780s: The Highland Clearances. Over 100 years, people in Highland Scotland were forced from their villages and farms so the land could be used for sheep. Thousands of people emigrated, many to Ireland or North America.
1798: The Irish Rebellion. Irish people fought against British rule, with support from the French. Nearly 30,000 people died. Eventually, the British won.
1801: The UK is created. Because of the Irish rebellion, Britain dissolved the Irish parliament and moved its responsibilities to the British parliament. This created the United Kingdom of Great Britain and Ireland.
1825: The first passenger railway is built. It ran between Stockton and Darlington. Soon there were railways nearly everywhere. Many were shut in the 1960s.
1834: Abolition of slavery. Slavery becomes illegal across most of the British Empire after a new law is passed. There was a transitional period that lasted until 1838. Some areas had to wait until 1843: St Helena, Ceylon (now Sri Lanka) and places in India controlled by the East India Company. A new system of 'indentured labourers' was introduced to replace slavery; for many people it was not much better.
1837: The start of the Victorian era. During the reign of Queen Victoria, the British Empire grew until it had a population of over 400 million people. It included countries like India, Australia and much of Africa. Most of these countries are now independent.
1845-50: Irish Potato Famine. Over 1 million people died and about 1 million emigrated when a disease destroyed potatoes, the only food of the poor. During this time, many other foods were grown and sent to Britain. This made Ireland even more determined to become independent.
1851: The Great Exhibition. This trade fair in London showed 100,000 of the most amazing objects from the British Empire. It was held in a very big glass building called the ‘Crystal Palace’ and was visited by 6 million people, including Queen Victoria.
1901: The start of the Edwardian era. After Queen Victoria’s death, her son became King Edward VII. He died in 1910, but the ‘Edwardian era’ is often considered to last until 1914. Britain changed a lot after World War 1, so the Edwardian era marks the last days of the British Empire and the social system of large country houses and servants.
1903: The Suffragettes. For 11 years, women from the Women's Social and Political Union (called 'Suffragettes') fought for women to get the vote. After World War I, women over 30 who owned property were allowed to vote. In 1928, everyone over 21 was allowed to vote.
1914–18: World War 1. The war brought social change because women had to do the jobs of the men while they were fighting. Men from many other countries also helped Britain as part of the Allied Powers.
1921: The Catholic southern part of Ireland declared independence from Britain. It became a republic in 1949. Six mainly Protestant counties in the north stayed with Britain and became Northern Ireland (sometimes called ‘Ulster’). Protestants were usually of English or Scottish descent, while Catholics were usually of Irish descent. The impact is still felt today.
1939–45: World War 2. Famous moments included evacuating British soldiers from Dunkirk in France (1940), the Battle of Britain (German air attacks stopped by British pilots, 1940), the Blitz (bombing raids on British cities, 1940-41), and D-Day/Normandy Landings (when the US, Canada and UK invaded German-occupied France, 1944).
1948: The Windrush generation. People from the West Indies were invited to help Britain rebuild after the war or work in the NHS. Over the next decades, workers were invited from many other countries (including India, Pakistan and Bangladesh).
1951: Festival of Britain. An exhibition in London that celebrated British industry, art and science.
1966: Conflict in Northern Ireland. Over 30 years of violence and bombing (known as ‘the Troubles’) start because of tension between Unionists (mostly Protestant, who want Northern Ireland to stay with Britain) and Nationalists/Republicans (mostly Catholic, who want Northern Ireland as part of the Republic of Ireland). A peace deal was signed in 1998, which gave Northern Ireland its own locally-elected government.
1966: England wins the football World Cup. They won 4-2 against West Germany.
1972: Bloody Sunday. British troops kill 14 civil rights protestors in Derry, Northern Ireland.
1973: The Three-Day Week. Strikes by coal miners meant there was not enough fuel for power stations. For two months, companies could only use electricity three days a week.
1973: Britain joined the EEC. It was an early version of the European Union.
1978/79: Winter of Discontent. Over 4 million people went on strike, including gravediggers, hospital staff, lorry drivers and rubbish collectors.
1981: Brixton Riots. There were riots in London and some other cities in response to racism by police.
1994: The Channel Tunnel opens. It links the UK to France by road and rail.
1997: Death of Princess Diana. The Princess was much loved by the public, so her death at such a young age upset many people.
2005: Civil partnerships became legal. Same-sex couples gained the same rights as married couples.
2012: Queen’s Diamond Jubilee. There were celebrations because Queen Elizabeth had been queen for 60 years. London also hosted the Olympic Games.
2016: Brexit vote. 52% of the UK voted to leave the European Union (though in London, Scotland and Northern Ireland most people wanted to stay).
2020: Britain left the European Union.
2022: Queen’s Platinum Jubilee and Death of Queen Elizabeth II. National celebrations took place in June to recognise Queen Elizabeth II’s 70 years on the throne. Sadly, she died a few months later, in September. Her son became King Charles III.
“Behind it all lies the City of London, anxious to preserve its access to the world's dirty money. The City of London is a money-laundering filter that lets the City get involved in dirty business while providing it with enough distance to maintain plausible deniability… a crypto-feudal oligarchy which, of itself, is… captured by the international offshore banking industry. It is a gangster regime, cloaked with the 'respectability' of the trappings of the British establishment… guaranteed protection. No matter just how nakedly lawless their own conduct.”
“The City is often now described as the largest tax haven in the world, and it acts as the largest center of the global tax avoidance system. An estimated 50% of the world’s trade passes through tax havens, and the City acts as a huge funnel for much of this money.”
The dragons of the City of London
Since the early 17th century a pair of dragons has supported the shield in the City of London's coat of arms; and in the latter part of the 19th century ornamental boundary markers were erected at points of entry into the City, each surmounted by a dragon clutching the heraldic shield.
The dragons’ introduction seems to have derived from the legend of St George – whose cross has been a City emblem since at least the early 14th century – and may have been specifically linked to a popular misconception that a fan-like object bearing the cross on an earlier crest was a dragon’s wing.
The City dragon is often incorrectly called a griffin, or gryphon, even in some official literature. It is not clear how the confusion arose but the misnomer has become so entrenched that some authorities consider it to have earned a degree of legitimacy. This especially applies to the statue at Temple Bar, where Westminster’s Strand becomes the City’s Fleet Street. The term ‘east of the Griffin’ was once commonly employed to mean ‘east of Temple Bar’, i.e. in the City of London:
“If something unexpected did not happen, it meant another visit to a little office he knew too well in the City, the master of which, more than civil if you met him on a racecourse, … was quite a different person and much less easy to deal with east of the Griffin.”
Where do the City of London dragons come from?
The origin of the dragons isn't clear. Some say that they come from the story of George and the Dragon, as St George is the patron saint of England. The sword and the dragons certainly distinguish the coat of arms of the City of London from those of England. In England, we associate St George with slaying the dragon, and indeed dragons guard the City of London and mark out the different gates around the city, i.e. Aldersgate, Bishopsgate, Temple Bar, Bridge Gate and Moorgate. Just as dragons in stories protect something (Smaug in J.R.R. Tolkien's 1937 novel The Hobbit guards Erebor, for himself), are the City of London dragons there to protect the city?
The City of London is the original heart of London, having been established by the Romans in the first century AD. The City of London is surrounded by dragons. The photo above is of one of the two original dragon statues from the Coal Exchange. These two dragons lived above the entrance to the Coal Exchange in Lower Thames Street until it was demolished in 1962. After that they took up residence on either side of the Victoria Embankment.
Why do dragons guard the City of London?
The ancient City of London is protected by dragons that guard the main roads into the city from perfidious invaders. To help manage the trade in coal, a grand building was constructed near the Tower of London in 1847, and high above its main entrance, two plinths held two large cast-iron dragons.
Where can I see dragons in London?
These original dragons can be found today at the Victoria Embankment. Half size replicas can be found at High Holborn, Farringdon Street, Aldersgate, Moorgate, Bishopsgate, Aldgate, London Bridge and Blackfriars Bridge. Other buildings in the City are home to the dragons as well.
What are the boundaries of the City of London?
Marked with cast iron dragons in the street, the boundaries of the City of London stretch from Temple and the Tower of London on the River Thames north to Chancery Lane in the west and Liverpool Street in the east.
How many dragon statues are in London?
There are thirteen dragons around the City of London. Half-size replicas of the original pair of dragons made by Birmingham Guild Limited were erected at the main entrances to the City of London in the late 1960s.
Who controls the City of London?
The corporation is headed by the Lord Mayor of the City of London.
Description and blazon
The Corporation of the City of London has a full achievement of armorial bearings consisting of a shield on which the arms are displayed, a crest displayed on a helmet above the shield, supporters on either side, and a motto displayed on a scroll beneath the arms.
The blazon of the arms is as follows:
Arms: Argent a cross gules, in the first quarter a sword in pale point upwards of the last.
Crest: On a wreath argent and gules a dragon’s sinister wing argent charged on the underside with a cross throughout gules.
Supporters: On either side a dragon argent charged on the undersides of the wings with a cross throughout gules.
The Latin motto of the City is Domine dirige nos, which translates as “Lord, direct (guide) us”. It appears to have been adopted in the 17th century, as the earliest record of it is in 1633.
The dragon boundary marks are cast iron statues of dragons (sometimes mistaken for griffins) on metal or stone plinths that mark the boundaries of the City of London.
In fact, the City of London is so independent that it has its own flag, crest, police force, ceremonial armed forces, and a mayor with a special title: the Right Honourable, the Lord Mayor of London. Oddly enough, if the monarch wants to enter the City of London, they must first ask the Lord Mayor for permission.
Explained: The secret City of London which is not part of London
The famed English author, poet, and literary critic Samuel Johnson once said that “when a man is tired of London, he is tired of life; for there is in London all that life can afford.” Say this in 2017 and it still holds true, as London continues being one of the most visited cities in the world.
Known for its amazing bridges, modern buildings, and beautiful historic landmarks such as the Tower of London, Westminster Abbey, Houses of Parliament, and Buckingham Palace, London feels like the center of the world and, according to some, it is the world’s financial capital.
While most people will tell you that London is one great big city, the truth is that there is another London inside of London.
The City of London, situated within the city called London, is actually the original London. To make things clearer, we need to go back nearly 2,000 years in history, to when the Romans invaded Britain and founded the settlement of Londinium.
When the Romans arrived in Britain in 43 AD, there was no permanent settlement on the site of the City of London but it didn’t take long before Londinium was established. Its access to the River Thames transformed the new settlement into an important trading center. The town began growing rapidly.
About two centuries after its establishment, Londinium was already a large Roman city, with a population of over 10,000 people. It was one of the most important trade centers in the Roman Empire, and the Romans took good care of it, constructing forts for protection, including the gigantic London Wall, parts of which can still be seen today.
A model of London in 85 to 90 AD on display in the Museum of London, depicting the first bridge over the Thames. Author: Steven G. Johnson. CC BY-SA 3.0
The London Wall defined the shape and size of the city. But what is more interesting, the City of London remained within the wall over the course of more than 18 centuries and didn’t extend beyond.
Life in Londinium continued after the Romans left, and even though the city experienced some hard times and fell into decline, its location proved so good that it was brought back to its former glory. Trade thrived again, and the city grew both economically and in population.
The prosperous trading center didn't go unnoticed by William the Conqueror, who decided not to attack it; instead, he came to London in a friendly fashion, offering its citizens privileges and recognizing their liberties, asking in return that they recognize him as the new King.
The citizens of London did recognize William the Conqueror as the new King and kept their authority and liberties. Throughout the years, many monarchs rose and fell, but the City of London and the liberties of its citizens remained intact.
A surviving fragment of the London Wall behind Tower Hill Station (2005). Author: John Winfield. CC BY-SA 2.0
Although some monarchs saw the city as a threat, thinking that it was too independent, powerful, and rich, none of them attempted to subordinate the City of London to their rule. The area was not dependent on another power, and it had the sovereignty to govern, tax, and judge itself.
Westminster was built nearby with the purpose of competing with the powerful City of London and this was when the second London was born. The new city expanded rapidly and it eventually surrounded the City of London.
In 1889 the County of London was formed, and the name 'London' came to be used more often for the larger area that surrounded the original City of London.
Top 10 Facts About The City Of London
London covers 600 square miles and has a population of 8.6 million, but only its oldest part, just one square mile in size, is called the City of London. That is where the Romans founded the city of Londinium shortly after they arrived in 43 AD. Today, the City of London – or simply "the City" – is the centre of London's finance industry. The City is where you find the Bank of England, the London Stock Exchange, the investment banks, insurance companies and financial markets. It combines the most modern headquarters buildings with Roman remains and medieval churches, and it is great to explore on foot. Here are 10 facts about the City:
- Geography plays a key role in the success of the City of London. Unlike New York, Tokyo or Hong Kong, the City’s business day overlaps those of all the world’s financial centres. The City can trade with the Eastern Hemisphere in the morning and the Western Hemisphere in the afternoon, allowing its dealers to trade in all major markets in one day.
- Though it is the oldest part of London, the City doesn't look very old. That is because it has been almost entirely destroyed and rebuilt twice: once in the Great Fire of London in 1666, and then again after being bombed in the Second World War. It is fascinating to track down the structures which survived the Great Fire and World War 2.
The boundaries of the City of London are marked with dragons. Photo Credit: ©Ursula Petula Barzey.
- Everywhere you look there are new buildings being constructed in the City of London, many by the world’s leading architects. About a quarter of the buildings are replaced every 25 years.
- Before 1980, any bank operating in the City of London had to have an office within 10 minutes' walk of the Bank of England. This was because, in the event of a crisis, the Governor of the Bank of England wanted to have the Chief Executive of every bank in the City in his office within 30 minutes.
A view of the City of London from the River Thames. In the forefront is 20 Fenchurch Street, known as the Walkie Talkie Building. In the background is The Leadenhall Building, known as "The Cheesegrater" because of its distinctive wedge shape. Photo Credit: ©Nigel Rundstrom.
- The Bank of England was devised by a Scot, William Paterson, and its first Governor was a Frenchman, John Houblon. It is the central bank for the whole of the United Kingdom, and yet it is called the Bank of England.
- There are over 500 banks resident in the City of London, most of them foreign. The City has more Japanese banks than Tokyo and more American banks than Manhattan.
In the City of London, new skyscrapers spring up next to old landmarks. Here, 30 St Mary Axe, known as "The Gherkin", has been built next to a mediaeval church. Photo Credit: ©Nigel Rundstrom.
- The City of London is the centre of global foreign exchange dealing. Over 40% of all the world’s foreign exchange transactions are made in the City – a total of $2.7 trillion per day!
- Some of the City's most famous institutions started out in coffee houses at the end of the 17th century. Jonathan's and Garraway's coffee houses in Exchange Alley saw the first buying and selling of company stocks. Edward Lloyd's coffee house was where ships and their cargoes could be insured, leading to the foundation of Lloyd's of London.
- City life used to be dominated by the Livery Companies, trade guilds who trained craftsmen, set standards and controlled the practice of trades. Over one hundred still exist today even though the trades they represented have vanished from the City of London. Many still occupy grand “livery halls” and survive as social and charitable institutions.
- There are more international telephone calls made from the City of London than anywhere else in the world. This shows the truly global nature of the businesses which operate there. The City of London contains a mix of modern and historical buildings, traditions and stories. A Blue Badge Tourist Guide can show you the hidden parts of the City of London along with the main sights. To make sure you see the full range of London sights, a tour of the City of London is a must.
The City Of London (Aka The Crown) Is Controlling The World’s Money Supply
The 'Crown' is not Westminster, nor the Queen, nor England.
The City of London has been granted various special privileges since the Norman Conquest, such as the right to run its own affairs, partly due to the power of its financial capital. These are also mentioned by the Statute of William and Mary in 1690.
The City State of London is the world's financial power centre and the wealthiest square mile on earth. It contains the Rothschild-controlled Bank of England, Lloyd's of London, the London Stock Exchange, ALL British banks, branch offices of 385 foreign banks, and 70 U.S. banks.
It has its own courts, laws, flag and police force — it is not part of greater London, or England, or the British Commonwealth — and it PAYS ZERO TAXES!
The City State of London houses Fleet Street's newspaper and publishing monopolies (BBC/Reuters). It is also HQ for worldwide English Freemasonry and for the worldwide money cartel known as The Crown…
For centuries the Bank of England has been the center of the world's fraudulent money system, with its 'debt-based' fiat currency.
The Rothschild banking cartel has maintained tight-fisted control of the global money system through:
- The Bank for International Settlements (BIS),
- The International Monetary Fund (IMF), and
- The World Bank — plus the central banks of each nation (the Federal Reserve in their American colony) and satellite banks in the Caribbean.
They determine with the stroke of a pen the value of ALL currency on earth. It is their control of the money supply which allows them to control world affairs — from financing both sides of every conflict, through interlocking directorates in weapons-manufacturing companies, executing global depopulation schemes/crusades/genocide, to control of the food supply, medicine and ALL basic human necessities.
They have groomed their invisibility through control of the so-called "free press" and wall themselves off with accusations of anti-Semitism whenever the spotlight is shone upon them.
The Crown is NOT the Royal Family or British Monarch.
The Crown is the private corporate City State of London — its Council of 12 members (a Board of Directors) rules the corporation under a mayor, called the LORD MAYOR — with legal representation provided by S.J. Berwin.
City of London map
They are known as “The Crown.” The City and its rulers, The Crown, are not subject to the Parliament. They are a Sovereign State within a State. The City is the financial hub of the world.
It is here that the Rothschilds have their base of operations and their centrality of control:
- The Central Bank of England (controlled by the Rothschilds) is located in The City
- All major British banks have their main offices in The City
- 385 foreign banks are located in The City
- 70 banks from the United States are located in The City
- The London Stock Exchange is located in The City
- Lloyd’s of London is located in The City
- The Baltic Exchange (shipping contracts) is located in The City
- Fleet Street (newspapers & publishing) is located in The City
- The London Metal Exchange is located in The City
- The London Commodity Exchange (trading rubber, wool, sugar, coffee) is located in The City
Every year a Lord Mayor is elected as monarch of The City.
The British Parliament does not make a move without consulting the Lord Mayor of The City. For here in the heart of London are grouped together Britain’s financial institutions dominated by the Rothschild-controlled Central Bank of England.
The Rothschilds have traditionally chosen the Lord Mayor since 1820. Who is the present-day Lord Mayor of The City? Only the Rothschilds know for sure…
How the City of London Came Into Power Inside England
MAYER AMSCHEL BAUER opened a money-lending business on Judenstrasse (Jew Street) in Frankfurt, Germany in 1750 and changed his name to Rothschild.
Mayer Rothschild had five sons.
The smartest of his sons, Nathan, was sent to London to establish a bank in 1806. Much of the initial funding for the new bank was tapped from the British East India Company, over which Mayer Rothschild had significant control. Mayer Rothschild placed his other four sons in Frankfurt, Paris, Naples, and Vienna.
In 1815, Nathan Rothschild saw an opportunity in the Battle of Waterloo. Early in the battle, Napoleon appeared to be winning, and the first military report to London communicated that fact. But the tide turned in favor of Wellington.
A courier of Nathan Rothschild brought the news to him in London on June 20, a full 24 hours before Wellington's courier arrived with news of the victory. Seizing on this fortuitous event, Nathan Rothschild began spreading the rumor that Britain had been defeated.
With everyone believing that Wellington was defeated, Nathan Rothschild began to sell all of his stock on the English Stock Market. Everyone panicked and also began selling causing stocks to plummet to practically nothing.
At the last minute, Nathan Rothschild began buying up the stocks at rock-bottom prices.
This gave the Rothschild family complete control of the British economy — now the financial centre of the world — and forced England to set up a revamped Bank of England with Nathan Rothschild in control.
The ruling 'Committee of 300' for the 'Crown', the London-based corporation, includes a number of 'American' names.
Why are these 'Americans' on a foreign committee? Because the Crown STILL owns the UNITED STATES CORPORATION, a private corporation!
The Lord Mayor and 12-member council serve as proxies/representatives who sit in for 13 of the world's wealthiest, most powerful banking families (syndicates), headed by the Rothschild Dynasty.
These families and their descendants run the Crown Corporation of London.
The Rockefeller Syndicate runs the American colony through interlocking directorships in JP Morgan Chase, Bank of America, and Brown Brothers Harriman (BBH) of New York, along with their oil oligarchy Exxon-Mobil (formerly the multi-headed colossus Standard Oil).
They also manage the Rothschild oil asset British Petroleum (BP). The Crown Corporation holds title to worldwide Crown land in Crown colonies like Canada, Australia, New Zealand and many Caribbean islands.
The British parliament and the British PM serve as a public front for the hidden power of these ruling Crown families.
“Today the path to total dictatorship in the U.S. can be laid by strictly legal means… We have a well-organized political-action group in this country, determined to destroy our Constitution and establish a one-party state…
“It operates secretly, silently, continuously to transform our Government… This ruthless power-seeking elite is a disease of our century… This group… is answerable neither to the President, the Congress, nor the courts. It is practically irremovable.” — Senator William Jenner, 1954 speech
Broken US-Indigenous treaties: A timeline
As long as the United States has negotiated treaties with Indigenous nations, it has broken those treaties. There is a popular tendency to think of these treaties as inanimate artifacts of the distant past.
This belief, however, is a symptom of the historical amnesia that continues to relegate present-day Indigenous rights issues to the margins. Treaties are, in fact, living documents, which even today legally bind the United States to the promises it made to Native peoples centuries ago. Treaties also acknowledge the inherent sovereignty of Indigenous nations, a fact that has been disputed and undermined in U.S. courts and Congress since 1831, when the Supreme Court ruled that tribes were “domestic dependent nations” without self-determination.
Of the nearly 370 treaties negotiated between the U.S. and tribal leaders, Stacker has compiled a list of 15 broken treaties negotiated between 1777 and 1868 using news, archival documents, and Indigenous and governmental historical reports.
Treaty With the Delawares/Treaty of Fort Pitt (1778)
The 1778 Treaty with the Delawares was the first treaty negotiated between the newly formed United States and an Indigenous nation. The Lenape (Delaware) were already being forced from their ancestral homelands in New York City, the lower Hudson Valley, and much of New Jersey when the Dutch settled there in the 17th century. The treaty stipulated peace between the Lenape and the U.S. as well as mutual support against the British. However, this supposed peace did not last long: In 1782, Pennsylvania militiamen murdered almost 100 Lenape citizens at Gnadenhutten, forcing the Lenape out toward Ohio.
Treaty of Fort Stanwix (1784)
Weakened by the constant encroachment of white settlers after the Revolutionary War, the Iroquois Confederacy was forced to cede part of New York and a large portion of present-day Pennsylvania in the Treaty of Fort Stanwix. In return, the U.S. promised to protect tribal lands from further settlement by white colonists. In the following years, the U.S. did not enforce the treaty terms, and the lands inhabited by the Iroquois Confederacy continued to shrink.
Treaty of Hopewell (1785-86)
The Treaty of Hopewell includes three treaties signed by the U.S. and the Cherokee, Choctaw, and Chickasaw Nations at General Andrew Pickens’ plantation following the Revolutionary War. The treaties supposedly offered the three tribes the protection and friendship of the U.S. and promised no future settlement on tribal lands. Despite these terms, the encroachment of white settlers onto treaty territory was already underway, and future treaties would shrink Cherokee, Choctaw, and Chickasaw lands even further.
Treaty of Canandaigua/Pickering Treaty (1794)
In 1794, the U.S. government and the Haudenosaunee Confederacy, or Six Nations (comprising the Mohawk, Cayuga, Onondaga, Oneida, Seneca, and Tuscarora Nations of New York), signed the Treaty of Canandaigua. In exchange for the Confederacy’s allyship after the Revolutionary War, the U.S. returned over a million acres of Iroquois land that had been previously ceded in the Fort Stanwix Treaty. The Canandaigua Treaty also recognized the sovereignty of the Six Nations to govern themselves and set their own laws.
Despite this apparent act of friendship, the land returned to the Six Nations was lost to U.S. expansion, and the tribes were forced to relocate. While the Onondaga, Seneca, Tuscarora, and Oneida stayed on reservations in New York, the Mohawk and Cayuga moved into Canada.
Treaty of Greenville (1795)
An increasing number of white settlers moved into the Great Lakes region in the 1780s, escalating tension with established Indigenous nations. The Shawnee, Delaware, Miami, Ottawa, Ojibwe, and Potawatomi Nations banded together as the Northwestern Confederacy and assembled an armed resistance to prevent further colonization.
In 1794, a large contingent of the U.S. military, led by General “Mad” Anthony Wayne, was tasked with putting an end to the Northwestern Confederacy’s resistance. The Confederacy was defeated in the Battle of Fallen Timbers and forced to sue for peace. The Treaty of Greenville saw the tribes of the Northwestern Confederacy cede large tracts of land in present-day Michigan, Ohio, Indiana, Wisconsin, and Illinois. The treaty was soon broken, however, by white settlers who continued to expand their reach into treaty lands.
Treaty with the Sioux (1805)
In 1805, General Zebulon Pike mounted an expedition up the Mississippi River without informing the U.S. government. Pike met with a group of Dakota leaders, who allegedly ceded 100,000 acres of land to build a fort and promote U.S. trade in exchange for an unspecified amount of money. Of the seven Dakota leaders, only two signed the treaty. Though Pike valued the purchase at $200,000 in his journal, he left only $200 worth of gifts upon signing. The president never proclaimed the treaty, a necessary step that makes treaties official, and the U.S. adjusted the purchase price to $2,000.
Treaty of Fort Wayne (1809)
In the Treaty of Fort Wayne, the Potawatomi, Delaware, Miami, and Eel River tribes ceded 2.5 million acres of their lands in present-day Michigan, Indiana, Illinois, and Ohio for roughly 2 cents an acre, under pressure from William Henry Harrison, the then-governor of Indiana. Not long after, Harrison led an attack on a camp of followers of Tenskwatawa, the Shawnee Prophet, and Tecumseh, who resisted the encroachment of white settlers on the Ohio Valley Nations. The violence spurred by this attack persisted into the War of 1812.
Indian Removal Act (1830)
Though not technically a treaty, the Indian Removal Act of 1830 functioned as a displacement mechanism and was largely responsible for the treaties created over the following decades. President Andrew Jackson had long been a violent proponent of the forced relocation of Indigenous tribes from the southeast to western areas, leading military efforts against the Creek Nation in 1814 and negotiating many treaties which dispossessed tribes of their lands.
The Indian Removal Act created a process by which the president could exchange tribal lands in the eastern United States for federally designated land west of the Mississippi River by negotiating removal treaties with Indigenous nations. While the act was framed as a peaceful and voluntary process, tribes that did not “cooperate” were made to comply through military force, cheated or tricked out of their land, or subjected to the violence of local white settlers.
Treaty of New Echota (1835)
Following the passage of the Indian Removal Act, facing tremendous pressure to move west, a small group of Cherokees not authorized to act on behalf of the Cherokee people negotiated the Treaty of New Echota. The treaty gave up all Cherokee lands east of the Mississippi River in exchange for $5 million and new territory in Oklahoma. Even though most Cherokee people considered the agreement fraudulent, and the Cherokee National Council formally rejected it in 1836, Congress ratified the treaty.
Two years later, the Treaty of New Echota was used to justify the forced removal of the Cherokee people. In 1838, roughly 16,000 Cherokees were rounded up by the U.S. military and forced to march more than 1,000 miles to their new lands. Over 4,000 Cherokee people died on the Trail of Tears.
Treaty with the Potawatomi (1836)
In 1832, the Potawatomi Nation signed a peace treaty with the U.S. ensuring the Potawatomi people's safety on their reservations in Indiana. Still, it wasn't long before the U.S. broke this treaty. Further negotiations followed, but in 1836, the Potawatomi were forced to sell their land for around $14,000 and move westward. Though many Potawatomi tried to stay, in 1838, the U.S. government enforced their removal by way of a 660-mile forced march from Indiana to Kansas. Of the 859 Potawatomi people who began what would later be known as the Trail of Death, 40 died, many of whom were children.
Fort Laramie Treaty (1851)
The 1851 Fort Laramie Treaty “defined” the territory of the Great Sioux Nation (Dakotas, Lakotas, and Nakotas) in North and South Dakota, Nebraska, Wyoming, and Montana, in exchange for the creation of roads and railways and the promise of the U.S. to protect the Sioux from American citizens. Nevertheless, settlers and the U.S. military violated the treaty and invaded Lakota lands. Disputes over the treaty’s integrity persist, as evidenced by the building of the Dakota Access Pipeline, which was constructed on treaty lands near the Standing Rock Sioux Reservation. In 2016, water protectors and activists established a camp at Standing Rock to prevent the pipeline’s construction, where they were subjected to attack dogs and other methods of excessive force by law enforcement. The pipeline is still operational.
Treaties of Traverse des Sioux and Mendota (1851)
Under threat of military violence from the increasing numbers of white settler-colonists moving into Minnesota, the Dakota and Mendota were forced to cede millions of acres of land in the Treaty of Traverse des Sioux and Mendota in exchange for reservations and $1,665,000—the equivalent of about 7.5 cents per acre. However, the Dakota and Mendota never received either provision. The representatives from the U.S. government who negotiated the treaty tricked the Dakota representatives into signing a third document, which reallocated the funds meant for the Dakota and Mendota to traders to fulfill invented “debts.” The U.S. Senate further violated the treaty by eliminating the provision for reservations.
Land Cession Treaty with the Ojibwe/Treaty of Washington (1855)
In the 1855 Treaty of Washington, the Ojibwe ceded nearly all of their remaining land not already lost to the U.S. during previous treaties. This new treaty also created the Leech Lake and Mille Lacs Reservations and allotted reservation land to individual families. In doing so, the U.S. attempted to subvert the Ojibwe’s traditional relationship with the land by instating a system of private property, as well as forcing the Ojibwe people to become farmers, a departure from their historical lifestyle of hunting, fishing, and gathering. However, it was mutually agreed that the Ojibwe would be able to continue hunting and fishing on ceded territory.
Unfortunately, in the decades following the signing of the treaty, the state of Minnesota outlawed hunting and harvesting without a license on off-reservation land, a direct violation of the treaty. Despite the Supreme Court’s reaffirmation of the Ojibwe’s hunting and gathering rights on ancestral lands in 1999, conflicts over the use of these lands, including for pipeline development, are ongoing.
Medicine Lodge Treaty (1867)
Two years after the culmination of the Civil War, violence against Plains tribes instigated by westward-moving white settlers came to a head. More than 5,000 representatives of the Kiowa, Comanche, Arapaho, Kiowa-Apache, and Southern Cheyenne nations met with U.S. government delegates to ostensibly negotiate peace. Ultimately, the treaty relocated the Comanches and Kiowas onto one reservation and the Cheyennes and Arapahoes onto another. Even though the participating tribes never approved the treaty, Congress ratified it in 1868 and then quickly began violating the terms, withholding payments, preventing hunting, and cutting down the size of reservations.
In 1903, Kiowa chief Lone Wolf sued the U.S. for defrauding the tribes who participated in the Medicine Lodge Treaty. In a devastating ruling that would have grave consequences for Indigenous land rights, the Supreme Court ruled that Congress could legally “abrogate the provisions of an Indian treaty.” In other words, any treaty made between the U.S. and Native American tribes could be broken by Congress, rendering treaties essentially powerless.
Fort Laramie Treaty (1868)
The Fort Laramie Treaty was negotiated with the Sioux (Dakota, Lakota, and Nakota Nations) and the Arapaho Tribe. It established the Great Sioux Reservation, which comprised all of South Dakota west of the Missouri River, and protected the sacred Black Hills, designating the area as "unceded Indian Territory." It took only until 1874 for the U.S. to violate the terms of the treaty, when gold was discovered in the Black Hills. The boundaries outlined in the treaty were hastily redrawn to allow white Americans to mine the area.
In the 1980 case United States v. Sioux Nation of Indians, the Supreme Court ruled that the U.S. had illegally expropriated the Black Hills, and that the Sioux were entitled to over $100 million in reparations. The Sioux turned down the money, saying that the land had never been for sale. Conflicts over the U.S.'s illegal usage of Sioux lands outlined in the Fort Laramie Treaty are ongoing. In 2018, the Rosebud Sioux Tribe and the Fort Belknap Indian Community sued the Trump administration for violations concerning the permitting of the Keystone XL Pipeline, which was shut down in June 2021.
The outplacement and adoption of indigenous children
From the beginning of the colonial period, Native American children were particularly vulnerable to removal by colonizers. Captured children might be sold into slavery, forced to become religious novitiates, made to perform labor, or adopted as family members by Euro-Americans; although some undoubtedly did well under their new circumstances, many suffered. In some senses, the 19th-century practice of forcing children to attend boarding school was a continuation of these earlier practices.
Before the 20th century, social welfare programs were, for the most part, the domain of charities, particularly of religious charities. By the mid-20th century, however, governmental institutions had surpassed charities as the dominant instruments of public well-being.
As with other forms of Northern American civic authority, most responsibilities related to social welfare were assigned to state and provincial governments, which in turn developed formidable child welfare bureaucracies. These were responsible for intervening in cases of child neglect or abuse; although caseworkers often tried to maintain the integrity of the family, children living in dangerous circumstances were generally removed.
The prevailing models of well-being used by children’s services personnel reflected the culture of the Euro-American middle classes.
They viewed caregiving and financial well-being as the responsibilities of the nuclear family; according to this view, a competent family comprised a married couple and their biological or legally adopted children, with a father who worked outside the home, a mother who was a homemaker, and a residence with material conveniences such as electricity.
These expectations stood in contrast to the values of reservation life, where extended-family households and communitarian approaches to wealth were the norm.
For instance, while Euro-American culture has emphasized the ability of each individual to climb the economic ladder by eliminating the economic “ceiling,” many indigenous groups have preferred to ensure that nobody falls below a particular economic “floor.”
In addition, material comforts linked to infrastructure were simply not available on reservations as early as in other rural areas.
For instance, while U.S. rural electrification programs had ensured that 90 percent of farms had electricity by 1950—a tremendous rise compared with the 10 percent that had electricity in 1935—census data indicated that the number of homes with access to electricity did not approach 90 percent on reservations until 2000.
These kinds of cultural and material divergences from Euro-American expectations instantly made native families appear to be backward and neglectful of their children.
As a direct result of these and other ethnocentric criteria, disproportionate numbers of indigenous children were removed from their homes by social workers. However, until the mid-20th century there were few places for such children to go; most reservations were in thinly populated rural states with few foster families, and interstate and interethnic foster care and adoption were discouraged.
As a result, native children were often institutionalized at residential schools and other facilities. This changed in the late 1950s, when the U.S. Bureau of Indian Affairs joined with the Child Welfare League of America in launching the Indian Adoption Project (IAP), the country’s first large-scale transracial adoption program. The IAP eventually moved between 25 and 35 percent of the native children in the United States into interstate adoptions and interstate foster care placements. Essentially all of these children were placed with Euro-American families.
Appalled at the loss of yet another generation of children—many tribes had only effected a shift from government-run boarding schools to local schools after World War II—indigenous activists focused on the creation and implementation of culturally appropriate criteria with which to evaluate caregiving.
They argued that the definition of a functioning family was a matter of both sovereignty and civil rights—that a community has an inherent right and obligation to act in the best interests of its children and that individual bonds between caregiver and child are privileged by similarly inherent, but singular, rights and obligations.
The U.S. Indian Child Welfare Act (1978) attempted to address these issues by mandating that states consult with tribes in child welfare cases. It also helped to establish the legitimacy of the wide variety of indigenous caregiving arrangements, such as a reliance on clan relatives and life with fewer material comforts than might be found off the reservation. The act was not a panacea, however; a 2003 report by the Child Welfare League of America, “Children of Color in the Child Welfare System,” indicated that, although the actual incidence of child maltreatment in the United States was similar among all ethnic groups, child welfare professionals continued to substantiate abuse in native homes at twice the rate of substantiation for Euro-American homes. The same report indicated that more than three times as many native children were in foster care, per capita, as Euro-American children.
Canadian advocates had similar cause for concern. In 2006 the leading advocacy group for the indigenous peoples of Canada, the Assembly of First Nations (AFN), reported that as many as 1 in 10 native children were in outplacement situations; the ratio for nonnative children was approximately 1 in 200. The AFN also noted that indigenous child welfare agencies were funded at per capita levels more than 20 percent under provincial agencies. Partnering with a child advocacy group, the First Nations Child and Family Caring Society of Canada, the AFN cited these and other issues in a human rights complaint filed with the Canadian Human Rights Commission, a signal of the egregious nature of the problems in the country’s child welfare system.
The colonization of the Americas involved religious as well as political, economic, and cultural conquest. Religious oppression began immediately and continued unabated well into the 20th—and some would claim the 21st—century.
Although the separation of church and state is given primacy in the U.S. Bill of Rights (1791) and freedom of religion is implied in Canada’s founding legislation, the British North America Act (1867), these governments have historically prohibited many indigenous religious activities.
For instance, the Northwest Coast potlatch, a major ceremonial involving feasting and gift giving, was banned in Canada through an 1884 amendment to the Indian Act, and it remained illegal until the 1951 revision of the act.
In 1883 the U.S. secretary of the interior, acting on the advice of Bureau of Indian Affairs personnel, criminalized the Plains Sun Dance and many other rituals; under federal law, the secretary was entitled to make such decisions more or less unilaterally. In 1904 the prohibition was renewed.
The government did not reverse its stance on the Sun Dance until the 1930s, when a new Bureau of Indian Affairs director, John Collier, instituted a major policy shift. Even so, arrests of Sun Dancers and other religious practitioners continued in some places into the 1970s.
Native American Tribes in the USA 2023
Did you know:
Nearly half of Native American children live in poverty, making rural reservation communities home to one of the minority groups with the most need in the United States.
Vacant FEMA trailers from Katrina given to Indian tribes in need of housing
Nearly six years after the hurricane, the mobile homes that became a symbol of the government’s failed response are finally being put to good use. FEMA has quietly given many of them away to American Indian tribes that are in desperate need of affordable housing.
In the aftermath of the 2005 hurricane, FEMA bought thousands of temporary homes for $20,000 to $45,000 each — both mobile homes and travel trailers.
The mobile homes proved impractical in areas where power and water service had been destroyed. And some people living in travel trailers started to fall sick because the RVs had high levels of formaldehyde, a cancer-causing chemical common in building materials.
People are still living in FEMA’s toxic trailers.
Bengal Famine of 1770
Yet another famine in Bengal, this horrific event killed a third of the population. The region was largely ruled by the English-owned East India Company, which ignored reports of severe drought and crop shortages and continued to increase taxes. Farmers were unable to grow crops, and any food that could be purchased was too expensive for the starving Bengalis. The company also forced farmers to grow indigo and opium, as they were much more profitable than inexpensive rice. Without large rice stocks, people were left with no food reserves, and the ensuing famine killed 10 million Bengalis.
The President, Directors and Company of the Bank of the United States, commonly known as the First Bank of the United States, was a national bank, chartered for a term of twenty years, by the United States Congress on February 25, 1791.
From 1810 to 1917, the U.S. federal government subsidized mission and boarding schools.
By 1885, 106 Indian schools had been established, many of them on abandoned military installations. Using military personnel and Indian prisoner labor, boarding schools were seen as a means for the government to assimilate Native children.
US INDIAN BOARDING SCHOOL HISTORY
The truth about the US Indian boarding school policy has largely been written out of the history books. There were more than 350 government-funded, and often church-run, Indian Boarding schools across the US in the 19th and 20th centuries. Indian children were forcibly abducted by government agents, sent to schools hundreds of miles away, and beaten, starved, or otherwise abused when they spoke their native languages.
Beginning with the Indian Civilization Act Fund of March 3, 1819, and the Peace Policy of 1869, the United States, in concert with and at the urging of several denominations of the Christian Church, adopted an Indian Boarding School Policy expressly intended to implement cultural genocide through the removal and reprogramming of American Indian and Alaska Native children, in order to accomplish the systematic destruction of Native cultures and communities. The stated purpose of this policy was to “Kill the Indian, Save the Man.”
Between 1869 and the 1960s, hundreds of thousands of Native American children were removed from their homes and families and placed in boarding schools operated by the federal government and the churches. Though we don’t know how many children were taken in total, by 1900 there were 20,000 children in Indian boarding schools, and by 1925 that number had more than tripled.
The Native American children who were voluntarily or forcibly removed from their homes, families, and communities during this time were taken to schools far away, where they were punished for speaking their Native languages; banned from acting in any way that might represent traditional or cultural practices; and stripped of the clothing, hair, personal belongings, and behaviors reflective of their native culture. They suffered physical, sexual, cultural, and spiritual abuse and neglect, and experienced treatment that in many cases constituted torture. Many children never returned home, and their fates have yet to be accounted for by the U.S. government.
How Boarding Schools Tried to ‘Kill the Indian’ Through Assimilation
Native American tribes are still seeking the return of their children.
How the US stole thousands of Native American children
For decades, the US took thousands of Native American children and enrolled them in off-reservation boarding schools. Students were systematically stripped of their languages, customs, and culture. And even though there were accounts of neglect, abuse, and death at these schools, they became a blueprint for how the US government could forcibly assimilate native people into white America.
Blistering, bleeding, ice cold baths: The ‘moral’ treatment of patients in America’s first ‘progressive’ private asylum founded 200 years ago
- Philadelphia Quakers founded the first private mental hospital in the United States, The Asylum for Persons Deprived of the Use of Their Reason, in 1813
- This was in response to the way mental institutions at the time treated their patients, allowing visitors to pay to watch them restrained behind bars
- The Quaker asylum practiced a progressive method of care that came to be known as ‘moral treatment’, but some of its manifestations could be cruel
The Trail of Tears was an ethnic cleansing and forced displacement of approximately 60,000 people of the “Five Civilized Tribes” between 1830 and 1850 by the United States government.
The Cherokees traveled the Trail of Tears for six months, starting in October 1838.
The Cherokees endured a painful journey with little to no food, water, or other supplies. On the Trail of Tears, their numbers dwindled; they lost their people to starvation, dehydration, and most of all disease. The Northern Route that most of the Cherokees favored as the more practical choice did not serve them well. Despite being given wagons and horses by the U.S. government, the trip was still difficult, as the weather made the routes impassable for wagons. This forced the elderly, women, and children to walk the snow-covered path between the Ohio and Mississippi rivers.
The Cherokees who traveled the water route also struggled.
It was summer when the U.S. government released some detainees. These Cherokee groups journeyed by boat along the water route, but the water levels were too low, forcing them to move on foot instead. The summer heat and drought made the trek even more excruciating. Disease spread through the people, costing them three to five lives each day. To this day, historians are unsure exactly how many lives were lost on the Trail of Tears, but they estimate that almost one-fifth of the Cherokees did not make it.
Between 1830 and 1850, the U.S. government forced the Cherokee, the Choctaw, and other tribes off their ancestral lands with deadly force in what’s become known as the Trail of Tears.
Throughout the 1830s, President Andrew Jackson ordered the forced removal of tens of thousands of Native Americans from their homelands east of the Mississippi River. This perilous journey to designated lands in the west, known as the Trail of Tears, was fraught with harsh winters, disease, and cruelty.
The name came to encompass the removal of all five tribes that occupied the southeastern United States.
All tribes incurred thousands of deaths and all experienced the sorrow of being ousted from their ancestral homelands. Today, many historians view Jackson’s actions as nothing short of ethnic cleansing.
The invention of electricity is a long and complex history that dates back to ancient times. The Greeks discovered static electricity around 600 BC, when Thales found that rubbing certain objects against one another could produce it.
However, the first major breakthrough occurred in 1831, when British scientist Michael Faraday, building on the experiments of Franklin and others, discovered the basic principles of electricity generation. He observed that he could create, or “induce,” electric current by moving magnets inside coils of copper wire, a discovery that laid the foundation for modern electrical technology.
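For reference, Faraday’s observation is usually summarized by his law of induction; what follows is the standard textbook statement, added here for clarity rather than taken from the article above:

\[
\mathcal{E} = -N \frac{d\Phi_B}{dt}
\]

Here \(\mathcal{E}\) is the induced electromotive force (the voltage that drives the current), \(N\) is the number of turns in the coil, and \(\Phi_B\) is the magnetic flux through one turn. The minus sign (Lenz’s law) indicates that the induced current opposes the change in flux that produced it, which is exactly what Faraday saw when moving magnets through copper coils.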
Immanuel Nobel emigrates from Sweden to St Petersburg, Russia, with his family in 1842.
Alfred Nobel at 16 years of age, 1850. Source: Wikimedia Commons.
Alfred (left), a teenager, and Ludvig, the younger of his two elder brothers, photographed in St. Petersburg, probably around the end of the 1840s.
Nikolai Nikolajewitsch Sinin [Nikolai N. Zinin], organic chemist and university professor, 1812-1880, Alfred Nobel’s teacher.
Ascanio Sobrero, 1812-1888, discoverer of Nitroglycerine.
Italian chemist and assistant to Professor J. T. Pelouze in Paris, under whom Alfred Nobel also trained.
The United States has issued official declarations of war during five separate military conflicts, the first in 1812 and the most recent in 1942.
Known as the “second war of independence,” the War of 1812 was America’s first military test as a sovereign nation. President James Madison, angered at Great Britain’s refusal to respect America’s neutrality in the ongoing conflict between Great Britain and France, asked Congress to declare war on its former colonial overlord.
July 23, 1829 – William Austin Burt, of the United States, invents and patents the typewriter, at the time called the typographer.
THE 1839 CORINGA CYCLONE
The Coringa cyclone made landfall at the port city of Coringa on India’s Bay of Bengal on Nov. 25, 1839, whipping up a storm surge of 40 feet (12 m), according to NOAA’s Atlantic Oceanographic and Meteorological Laboratory Hurricane Research Division. The hurricane’s wind speeds and category are not known, as is the case for many storms that took place before the 20th century. About 20,000 ships and vessels were destroyed, along with the lives of an estimated 300,000 people.
1839-1842: The First Opium War - Great Britain flooded the country with opium, causing an addiction crisis. The Qing Dynasty banned the drug, and a military confrontation resulted. British forces shut down Chinese ports, and Hong Kong was handed over to them.
Great Famine Ireland
One of the most famous famines in history, the Great Famine was caused by a devastating potato disease. 33% of the Irish population relied on the potato for sustenance, and the onset of the disease in 1845 triggered mass starvation that lasted until 1853. The large Catholic population was suppressed by British rule, left unable to own or lease land or hold a profession. When the blight struck, British ships prevented other nations from delivering food aid. Ireland experienced a mass exodus, with upwards of 2 million people fleeing the country, many to the United States. By its conclusion in 1853, 1.5 million Irish were dead and an additional 2 million had emigrated. In total, the population of Ireland shrank by a resounding 25%.
The 1846 War with Mexico started as a land dispute. In 1836, Texas won independence from Mexico to become the Republic of Texas, but Mexico never relinquished its claim on that land. So when the United States annexed Texas in 1845, tensions escalated between the northern and southern neighbors. When President James Polk sent U.S. troops to patrol the Rio Grande border, the Mexican Army attacked, giving Polk the justification he needed to ask Congress to declare war.
The American Medical Association
The American Medical Association (AMA) was organized in 1847 in Philadelphia through the efforts of Nathan Davis and Nathaniel Chapman primarily to deal with the lack of regulations and standards in medical education and medical practice. Some mental hospital superintendents became active members. Cordial relations between the two groups continued, and members of each attended the others’ meetings.
It All Began at Sutter’s Mill in 1848: An Overview of the California Gold Rush
On January 8, 1848, James W. Marshall, overseeing the construction of a sawmill at Sutter’s Mill in the territory of California, literally struck gold. His discovery of trace flecks of the precious metal in the soil at the bottom of the American River sparked a massive migration of settlers and miners into California in search of gold. The Gold Rush, as it became known, transformed the landscape and population of California.
Arriving in covered wagons, clipper ships, and on horseback, some 300,000 migrants, known as “forty-niners” (named for the year they began to arrive in California, 1849), staked claims to spots of land around the river, where they used pans to extract gold from silt deposits.
Prospectors came not just from the eastern and southern United States, but from Asia, Latin America, Europe, and Australia as well. Improvements in steamship and railroad technology facilitated this migration, which dramatically reshaped the demographics of California. In 1849, California established a state constitution and government, and formally entered the union in 1850.
Life as a Forty-Niner
Though migration to California was fueled by gold-tinted visions of easy wealth and luxury, life as a forty-niner could be brutal. While a small number of prospectors did become rich, the reality was that gold panning rarely turned up anything of real value, and the work itself was back-breaking.
The lack of housing, sanitation, and law enforcement in the mining camps and surrounding areas created a dangerous mix. Crime rates in the goldfields were extremely high. Vigilante justice was frequently the only response to criminal activity left unchecked by the absence of effective law enforcement. As prospectors dreaming of gold poured into the region, formerly unsettled lands became populated, and previously small settlements, such as the one at San Francisco, exploded.
A forty-niner panning for gold in the American River, 1850
As competition flared over access to the goldfields, xenophobia and racial prejudice ran rampant. Chinese and Latin American immigrants were routinely subjected to violent attacks at the hands of white settlers and miners who adhered to an extremely narrow view of what it meant to be truly “American.”
Illustration depicting Chinese gold prospectors during the Gold Rush
As the state government of California expanded to oversee the booming population, widespread nativist (anti-immigrant) sentiment led to the establishment of taxes and laws that explicitly targeted immigrants, particularly Chinese immigrants.
Violence across the Land
As agriculture and ranching expanded to meet the needs of the hundreds of thousands of new settlers, white settlers’ violence toward Native Americans intensified. Peter Hardeman Burnett, the first governor of California, openly declared his contempt for the native population and demanded its immediate removal or extinction. Under Burnett’s leadership, the state of California paid bounties to white settlers in exchange for Indian scalps. As a result, vigilante groups of miners, settlers, and loggers formed to track down and exterminate California’s native population, which by 1890 had been almost completely destroyed.
Though the Gold Rush had a transformative effect on California’s landscape and population, it lasted for a surprisingly brief period, from 1848 to 1855. It did not take long for gold panning to turn up whatever gold remained in silt deposits, and as the extraction techniques required to mine for gold became increasingly complex, gold mining became big business. As the mining industry exploded, individual gold-diggers simply could not compete with the level of resources and technological sophistication of the major mining conglomerates.
The orphan trains operated between 1854 and 1929, relocating an estimated 200,000 to 250,000 orphaned or abandoned children.
The Orphan Train movement was an effort to transport these children from cities on the United States East Coast to homes in the newly settled Midwest.
The movement was created in 1853 by Protestant minister Charles Loring Brace, founder of the Children’s Aid Society of New York City.
The orphan train movement was the forerunner of the modern American foster care system and led to the passage of child protection and health and welfare laws. The children were placed on display in local train stations, and placements were frequently made with little or no investigation or oversight.
The orphan train movement was based on the theory that the innocent children of poor Catholic and Jewish immigrants could be rescued and Americanized if they were permanently removed from depraved urban surroundings and placed with upstanding Anglo-Protestant farming families.
The Second Opium War, also known as the Second Anglo-Sino War, the Second China War, the Arrow War, or the Anglo-French expedition to China, was a colonial war lasting from 1856 to 1860, which pitted the British Empire and France against the Qing dynasty of China.
Immanuel Nobel returns to Sweden, and the creditors of his insolvent workshop in St Petersburg ask his son Ludvig to continue the operations.
The American Mafia, commonly referred to in North America as the Italian American Mafia, the Mafia, or the Mob, is a highly organized Italian American criminal society and organized crime group.
The Internal Revenue Service is formed on July 1, 1862.
The Ludvig Nobel Engineering Works Company was formed in St Petersburg.
Ludvig Nobel in a letter to his brother, Robert: ”Petroleum has a bright future!”
Robert Nobel starts the Lamp and lamp oil warehouse, Aurora, in Finland.
The 6 most surprising reactions to Abraham Lincoln’s death
An illustration of Lincoln’s death.
The assassination of Abraham Lincoln is widely accepted today as an American tragedy. But it wasn’t always that way.
When the news of Lincoln’s death, 150 years ago today, first reached the public, the reactions were as varied and visceral as the reactions to his life and career. Many people mourned — some even sought out his bloodied clothing and other relics. But others, in both the North and South, celebrated and reveled in the president’s death. And many people simply didn’t believe it was real.
Historian Martha Hodes examines the actual public response in her book Mourning Lincoln, delving into hundreds of letters and diaries from the time. She paints a far more complicated picture of Lincoln’s death than the one we know.
“Most of the books I’d read about Lincoln or the Civil War made these blanket statements that the nation was in mourning,” Hodes says. But the world was far more complex than that. “The Union and Confederacy were terrible antagonists, and victory or defeat didn’t repair that.”
1) Many people dismissed Lincoln’s death as just another rumor
Another illustration of Lincoln on his deathbed.
Today, we all know the bare facts about Lincoln’s assassination. Late on April 14, 1865, Abraham Lincoln was shot at Ford’s Theatre by John Wilkes Booth. On the 15th, Lincoln died. But at the time, many people didn’t even know that.
“LINCOLN IS ALIVE & WELL”
The advent of the telegraph made it relatively easy to transmit information in 1865, but Hodes notes that “rumors were still faster than the telegraph.” Because of that, it was difficult to know if the assassination had really happened. Soldiers joked about “Madam Rumor” if they didn’t take the idea seriously, and if they did, it was jumbled with other rumors, like ones that claimed General Grant had died or that Secretary of War Edwin Stanton had been killed. That problem was even worse in the South, where telegraph lines had been ravaged by the war.
Some newspapers reported on Lincoln’s assassination quickly, but it took a while for the truth to spread and be confirmed. Hodes quotes one soldier in Ohio who said the news “could be traced to no reliable source.” Making things worse, some telegraph messages were sent claiming “Lincoln is alive & well.”
Even Lincoln’s son Tad didn’t know the truth right away. The night of the assassination, he was seeing a play at Grover’s Theatre. War Department clerk James Tanner was seeing the same play, and he reported that the president had been assassinated. However, somebody else shouted that it was just a rumor being spread by pickpockets.
2) Some Southerners gleefully celebrated. But others mourned.
An (optimistic) allegory of reconciliation between the North and South.
It shouldn’t be surprising that some people celebrated Lincoln’s death. He was the symbol of a war that had just ripped apart the nation, and he was a victor whose Southern opponents were still suffering the stings of their loss. Even so, the gleeful reactions could be shocking.
Confederate lawyer Rodney Dorman called the killer “a great public benefactor” and felt relieved at Lincoln’s assassination. (In his diary, he spelled Lincoln’s name “Lincon” to emphasize the “con” he felt Lincoln was.) Some Confederates also hoped that Lincoln’s death might change the course of the war and, as one wrote, “produce anarchy in Yankeedom.” Hodes quotes one teen who took to her diary: “Hurrah!” she wrote. “Old Abe Lincoln has been assassinated.”
“ALAS! FOR MY POOR COUNTRY”
That said, celebrations weren’t universal in the South. Other reports paint a different picture — one Union soldier wrote that throughout Virginia, people mourned Lincoln’s death. Similarly, one former slave owner wrote “Alas! for my poor Country” in his diary, recording a much more charitable response to the president’s death. Others simply ignored it, since they were busy rebuilding a crippled South.
The reaction was further divided in the South by race. “It was very starkly divided between black Southerners and white Southerners,” Hodes says. Black Southerners genuinely mourned Lincoln’s death, while white Southerners felt something closer to a sense of reprieve from Union dominance, though they still worried about the future of the Confederate states.
3) Some Northerners reveled in Lincoln’s death — with ugly results
A political cartoon depicting the “Copperheads.”
It’s easy to forget that Northerners didn’t universally support the Civil War or Abraham Lincoln. There were the “Copperheads,” vocal Northern Democrats who weren’t loyal to the Union cause. “People often write about the North loving Lincoln,” Hodes says. “Lincoln’s Northern antagonists were a minority, but a significant and vocal minority.”
Hodes found records of some “good Union Men” who celebrated in the privacy of their homes when Lincoln was killed. Publicly, cities like Trenton that had a reputation for anti-Union sentiment still mourned. But privately, some reveled. A company in New Jersey “secretly rejoiced” at the news of Lincoln’s death. A woman in Bloomington, Indiana, held a “grand dinner” to celebrate. A Minnesota woman wanted to celebrate at a ball.
Yet others reacted violently against anti-Lincoln sentiment in the North. “Lincoln’s mourners wanted to put forward a universal grief,” Hodes says, “but the fact that people in their own midst were also celebrating Lincoln’s assassination was galling and infuriating to them.” In April 1865, an anti-Lincoln man was tarred and feathered in Swampscott, Massachusetts. Dissent wasn’t well tolerated in the North when it was publicly expressed.
4) Many of Lincoln’s mourners felt like they’d lost their best friend
Lincoln was eagerly greeted by former slaves in Richmond.
On April 4, 1865, shortly after the fall of Richmond to Union forces, President Lincoln arrived in the city with his son Tad. According to reports of the time, overjoyed African Americans circled the president. It was their praise in particular that was interesting: they called Lincoln “father” or “master Abraham.” Later they called him their “best friend.”
“THE BEST FRIEND I EVER HAD”
That sentiment carried over after Lincoln’s assassination. In a letter to the New York Anglo-African newspaper, one writer said that Booth had “murdered their best friend,” and the sentiment of Lincoln as a friend was a common one among Lincoln’s sympathizers. Freedpeople called Lincoln “the best friend I ever had” and “their best earthly friend.” The New Orleans Black Republican newspaper called Lincoln the “greatest earthly friend of the colored race.”
That intimate relationship with Lincoln — one of friendship — is hard to imagine today. Hodes says that at the time, the term “best friend” had a sense more akin to familial closeness than we might think today — a best friend was someone with a truly intimate connection. For many at the time, especially the African Americans who felt they benefited from Lincoln’s friendship, it was the most appropriate term.
5) Lincoln’s death was wrapped up in Easter celebrations, and that prompted a religious response
An illustration of Lincoln’s funeral procession on April 25, 1865.
Lincoln was shot on Good Friday in 1865 and died the next day. That made Easter a particularly dramatic experience, even as rumors about Lincoln’s death circulated across the country.
Churches across the country were faced with the difficult task of celebrating Easter and mourning the death of Lincoln at the same time. As one woman wrote: “Everybody here seems trying to remember that God will bear us safely through this new & terrible trial, if we are faithful.” These meditations forced broader grappling with how God could allow Lincoln to die, and what that might mean for pious behavior on Earth. Some people even believed that Lincoln’s assassination was the only fitting capstone to the violent war.
Of course, it wasn’t only the country’s Christian majority who grappled with the assassination of Lincoln. Hodes quotes a synagogue in California whose members were “stricken with sorrow” but resolutely agreed to bow to “divine decree.”
6) People jostled for relics of his life
A postcard showing the bed where Lincoln died, perfectly preserved. This postcard is from 1931, but people sought Lincoln relics as soon as news broke that he’d been shot.
It makes sense that people would want souvenirs from Lincoln’s life. But his death prompted scrapbooking, thievery, and fervid collection. “It was part of the aftermath of immediate shock,” Hodes says. “Even after Lincoln was buried, people still wanted these relics.”
One War Department employee took a blood-soaked towel from the president, while another clerk picked up Lincoln’s bloody collar. Others made immediate pilgrimages to the theater and place where Lincoln died to see history in the making. Like those who travel to Ground Zero today, the appeal was elemental — as one woman said, it made everything “so vivid.” Hodes recalls that some people even asked for their letters of mourning to be returned to them, so they could have a record of their initial reaction to Lincoln’s death.
In a way, that points to why reactions to Lincoln’s assassination still matter today. Hodes was first drawn to the project after 9/11, when she realized how varied reactions to a traumatic event can be. “As someone who was in New York on that day, I have an entire carton of what I would call relics,” she says. “I bought postcards of the Twin Towers, and I had newspaper headlines, and I didn’t even do anything with them. It’s about preserving history in which you participated as an individual.” That’s what people did after Lincoln’s death, and we continue to have reactions that are just as powerful — and complicated — today.
Birth Certificates are Federal Bank Notes
American Bank Note Company is a subsidiary of American Banknote Corporation, and its products range from currencies and credit cards to passports, driver’s licenses, and birth certificates.
In the USA, citizens have never obtained their original Birth Certificates — what they possess is a copy. Furthermore, these ‘copies’ have a serial number on them, issued on special Bank Bond paper and authorized by “The American Bank Note Company”. (More on this later).
The original birth or naturalization record for every U.S. Citizen is held in Washington, D.C., and the property and assets of every living U.S. Citizen are pledged as collateral for the National Debt.
Every citizen is given a number (*the red number on the Birth Certificate) and each live birth is reported to be valued at 650,000 to 750,000 Federal Reserve dollars in collateral from the Fed. Hence the saying “we are owned by the system”. LITERALLY.
FACT: The government recognizes two distinct classes of citizens: a state Citizen and a federal citizen. Learn the difference now.
“There are hundreds of thousands of sovereigns in the United States of America but I am not one of them. The sovereigns own their land in “allodium.” That is, the government does not have a financial interest in their land. Because of this they do not need to pay property tax (school tax, real estate tax). Only the powers granted to the federal government in the Constitution for the United States of America define the laws that they have to follow.
This is a very small subset of the laws most of us have to follow. Unless they accept benefits from or contract with the federal government, they do not have to pay Social Security tax, federal income tax, or resident individual state income tax.
They do not need to register their cars or get a driver’s license unless they drive commercially. They will not have to get a Health Security Card. They can own any kind of gun without a license or permit. They do not have to use the same court system that normal people do. ~
*See below for information re: State Citizenship (How to become a…)
“Unbeknownst to most people, the class termed “US citizen” did not exist as a political status until 1866. It was a class and “political status” created for the newly freed slaves and did not apply to the people inhabiting the states of the union who were at that time state Citizens.” ~ Mr. Richard James McDonald, former law enforcement, California
Now do the math.
If indeed 317 million US citizens are worth an average of $700,000 in collateral for the US debt, that would mean the US is worth roughly 222 Trillion dollars.
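The arithmetic itself checks out; a quick sketch, taking both numbers as the article’s assumptions rather than verified figures:

```python
# Back-of-the-envelope check of the article's arithmetic.
# Both inputs are the article's claimed figures, not verified data.
population = 317_000_000           # claimed number of US citizens
collateral_per_citizen = 700_000   # claimed average "collateral" per citizen, in dollars

total = population * collateral_per_citizen
print(f"${total:,}")               # $221,900,000,000,000 -- roughly 222 trillion
```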
Your birth certificate is really a bank note, which means you, the citizen, are what is known in the stock market as a commodity.
DEFINITION of ‘Commodity’
- A basic good used in commerce that is interchangeable with other commodities of the same type. Commodities are most often used as inputs in the production of other goods or services. The quality of a given commodity may differ slightly, but it is essentially uniform across producers. When they are traded on an exchange, commodities must also meet specified minimum standards, also known as a basis grade.
- Any good exchanged during commerce, which includes goods traded on a commodity exchange.
So if you didn’t catch it the first time, I will repeat myself at the risk of being redundant: US citizens are owned by the United States Federal Reserve, a note in the stock exchange, traded as a commodity.
The note is printed by The American Bank Note Company. Who are they?
Background on American Bank Note Company: the following letter to the editor was printed in the New York Times.
AMERICAN BANK NOTE COMPANY, NEW-YORK, Saturday, Dec. 2, 1865. (New York Times)
“To the Editor of New-York Times:
The attention of this company has been drawn to a paragraph from the Washington Star of the 30th ult., giving an account of some examination made at the Treasury Department as to the evidence of the surreptitious impression of a genuine plate, used by the counterfeiter in printing the backs of the spurious one hundred dollar compound interest note, having been taken after or before the plates and dies prepared by the American Bank Note Company were delivered by that company to the Treasury Department.
This paragraph states this “investigation shows that the counterfeits are made up from a plate surreptitiously obtained from this (the American Bank Note) Company.”
This statement is supported by a supposed demonstration, which to those familiar with the business will need no refutation, and which would amuse the counterfeiter, whoever he may be. And as no other reason, and no actual proof is pretended to support this imputation upon the security of plates and dies in the custody of the company, the paragraph might be left to every careful reader’s own correction.
I beg, however, the favor of stating, through your paper, that this company is ready to submit to, and give every aid to, any examination into the matter which the Treasury Department may desire; and that “experts” or plain men, upon the most thorough scrutiny, will have no doubt that the surreptitious impression was made from the genuine plate in the hands of the government, and after it was changed from the condition in which it was delivered by this company. Very respectfully, your obedient servant,
~GEO. W. HATCH, President.”
American Bank Note Company is a subsidiary of American Banknote Corporation and ABnote Group: http://abnote.com/
Today, following a variety of financial transformations, the American Banknote Corporation produces a wide variety of secure and official documents. With operations worldwide, its products range from currencies and credit cards to passports, driver’s licenses, and birth certificates.
How does this work?
Why didn’t you learn this in school? According to many sources, including an excerpt from researcher Brian Kelly,
“When the UNITED STATES declared bankruptcy, pledged all Americans as collateral against the national debt, and confiscated all gold, eliminating the means by which you could pay, it also assumed legal responsibility for providing a new way for you to pay, and it did that by providing what is known as the Exemption, an exemption from having to pay for anything. In practical terms, though, this meant giving each American something to pay with, and that ‘something’ is your credit.
Your value to society was then and still is calculated using actuarial tables and at birth, bonds equal to this ‘average value’ are created. I understand that this is currently between one and two million dollars. These bonds are collateralized by your birth certificate which becomes a negotiable instrument. The bonds are hypothecated, traded until their value is unlimited for all intents and purposes, and all that credit created is technically and rightfully yours.
In point of fact, you should be able to go into any store in America and buy anything and everything in sight, telling the clerk to charge it to your Exemption account, which is identified by a nine-digit number that you will recognize as your Social Security number without the dashes. It is your EIN, which stands for Exemption Identification Number.”
The FEDERAL RESERVE BANK is not owned and controlled by the U.S. Government; rather, it is owned and operated by international families. You are owned by foreign bankers, and your (physical) body is a collateral bond that has been issued on all your future earnings, your children’s future earnings, etc. Were you taught this in school?
If this is indeed the case, how do you change your current status?
The fact is, thousands of citizens have already changed their ‘slavery’ status by means of relinquishing the agreement and reverting to their Sovereign status (inalienable rights), the status you were born with such as constitutional rights of life, liberty, and property which are not transferable and, thus, are termed inalienable.
Questions to ask yourself:
- What type of person are you?
- What class of citizen are you?
- Can you change this status?
The Uniform Commercial Code (UCC) and the law of contracts make it difficult to conduct business in the USA. Now that you know your Birth certificate (registration of birth) is nothing more than a contract with the government, what will you do to change this? Did they tell you you were signing a contract? Did you know that you didn’t even have to register the birth?
The first published account of what became the Mafia in the United States dates to the spring of 1869. The New Orleans Times reported that the city’s Second District had become overrun by “well-known and notorious Sicilian murderers, counterfeiters and burglars, who, in the last month, have formed a sort of general co-partnership or stock company for the plunder and disturbance of the city.”
Ludvig Nobel is awarded the privilege to use the imperial Russian herald, the double eagle.
The first automobile suitable for use on existing wagon roads in the United States was a steam-powered vehicle invented in 1871 by Dr. J.W. Carhart, a minister of the Methodist Episcopal Church, in Racine, Wisconsin.
In the 1870s, exhibitions of so-called “exotic populations” became popular throughout the western world. Human zoos could be seen in many of Europe’s largest cities, such as Paris, Hamburg, London, and Milan, as well as American cities such as New York.
THE UNITED STATES BECAME A FOREIGN CORPORATION IN 1871
By: TLB Staff Writer | David-William June 24, 2016
Every day, the number of people learning that the “united states of America,” the representative Republic that it was, died when the Southern States abandoned Congress forever in 1861 is growing. Along with the pain of finding that you’ve been steeped in lies throughout your entire education, there’s still plenty more to cry about as you start putting the pieces together.
Seriously, after this Article, you’ll feel the temptation to learn enough more to go slap your kids’ teachers and professors around until they promise to make some kind of change or resign. Trust me, if we’re sending our kids to public school today, we’re not doing them any good. Sorry for the harsh words, but when we find out that the schools are really indoctrinating our kids to be slaves, and stupid ones at that, yet we do it anyway, it’s time to rethink our philosophy.
When we fail to plan, we plan to fail. How can we say our kids have great teachers while they’re teaching them bull$#!+ and lies? That’s what we think when we don’t know the truth either!! OUCH!!! Today’s teachers are shoveling propaganda during an age of information with the real facts thumbing them in the eyes. There are no excuses and there’s no reason to hear any. If it comes from public schools, public agencies, public MEDIA, or public servants, it’s a stinking lie! Even the term public means private if it’s connected to THE UNITED STATES.
A MESSAGE FOR ANYONE WHO IS CRAZY ENOUGH TO CLAIM U.S. CITIZEN STATUS
“Then, by passing the Act of 1871, Congress formed a corporation known as THE UNITED STATES. This corporation, owned by foreign interests, shoved the organic version of the Constitution aside by changing the word ‘for’ to ‘of’ in the title. Let me explain: the original Constitution drafted by the Founding Fathers read: ‘The Constitution for the united states of America.’ [note that neither the words ‘united’ nor ‘states’ began with capital letters] But the CONSTITUTION OF THE UNITED STATES OF AMERICA’ is a corporate constitution, which is absolutely NOT the same document you think it is. First of all, it ended all our rights of sovereignty [sui juris]. So you now have the HOW, how the international bankers got their hands on THE UNITED STATES OF AMERICA.”
“As an instrument of the international bankers, the UNITED STATES owns you from birth to death. It also holds ownership of all your assets, of your property, even of your children. Think long and hard about all the bills, taxes, fines, and licenses you have paid for or purchased. Yes, they had you by the pockets. If you don’t believe it, read the 14th Amendment. See how ‘free’ you really are. Ignorance of the facts led to your silence. Silence is construed as consent; consent to be beneficiaries of a debt you did not incur. As a Sovereign People we have been deceived for hundreds of years; we think we are free, but in truth we are servants of the corporation.”
THE UNITED STATES is the Vatican! They’re the Jesuits and the Zionists all rolled into one big rat’s nest! They’re the ones running the Pentagon, murdering everyone. They’re not about God! They work for Satan. They have been pumping your heads full of septic stew and they’re getting you to foot the bill! They’re the ones who own the I.R.S., so wake up.
The Grisly Story of One of America’s Largest Lynchings
The Biggest Mass Lynching In The United States, New Orleans, 1891
“Eleven men lay dead. The other eight Italian prisoners spared, either because they had not been found or someone had vouched for their innocence. For those who still had not seen enough, arrangements were made for small groups of ten to fifteen spectators each to pass through the prison to witness the vigilantes’ handiwork.” (Persico)
Bodies of some of the lynched Italian Americans arranged for public viewing. Illustrated American, April 4, 1891.
In Italy, public opinion clamored for justice and the vindication of Italy’s national honor.
The Prime Minister of Italy demanded punishment of the murderers, and the US refused. Italy’s Prime Minister ordered the Italian ambassador home from Washington.
Rumors now began to spread of Italian warships headed for the American coast. Confederate veterans from Tennessee and the Shelby Rifles of Texas volunteered to fight for Old Glory against Rome. Uniontown, Alabama, offered fifteen hundred men. From Georgia the War Department received an offer of “a company of unterrified Georgia rebels to invade Rome, disperse the Mafia and plant the Stars and Stripes on the dome of St. Peter’s.”
On May 5, a New Orleans grand jury convened to look into the murders. The grand jury’s report concluded that some of the jurors had been subject to “a money influence to control their decision.” As a result, six men were indicted for attempted bribery and one person was convicted, serving a short sentence.
As for the lynch mob, the grand jury decided that it “embraced several thousand of the first, best and even the most law-abiding citizens of the city … in fact, the act seemed to involve the entire people of the parish and the City of New Orleans. …” And after thoroughly examining the subject the grand jury reported there was no reason to indict anybody for the lynching.
However, due to the diplomatic sparring with Italy, the Department of Justice looked into the incident. After reviewing the eight-hundred-page transcript, U.S. attorney William Grant reported that the evidence against the defendants was “exceedingly unsatisfactory” and inconclusive. Later, all outstanding charges against those who had survived the prison massacre were dropped.
Public sentiment across the nation held that justice had triumphed — in the streets of New Orleans, if not in its courts. A few did disagree; The Nation magazine said we had “cut a sorry figure before the civilized world.” But New Orleans was content. “The hand of the assassin has been stayed,” the New Delta reported. “The Mafia is a thing of the past.”
President Harrison would have ignored the New Orleans carnage had the victims been black. The Italian government made that impossible. It broke off diplomatic relations and demanded an indemnity, which the Harrison administration paid. Harrison’s 1891 State of the Union called on Congress to protect foreign nationals — though not black Americans — from mob violence.
To appease Italian-Americans, Harrison issued a Columbus Day proclamation in 1892. The myth of Columbus as “the first immigrant” is still told today.
Phrased in a mockery of the Italian American dialect, “Who killa da chief?” remained a taunt used to insult Italian Americans in New Orleans well into the 1990s.
Ludvig Nobel and Peter Bilderling build a factory for weapons production in Izhevsk in the Urals.
Robert Nobel looks for walnut wood for Ludvig’s rifles in the forests of the Caucasus.
The monopoly-contract system for renting oil deposits is abolished in favor of public auction of contracts in the Caucasus.
DDT is a colorless, tasteless, and almost odorless crystalline chemical compound, an organochloride. Originally developed as an insecticide, it became infamous for its environmental impacts. It was first synthesized in 1874 by the Austrian chemist Othmar Zeidler.
DDT exposure in people
Exposure to DDT in people most likely occurs through eating contaminated foods, including meat, fish, and dairy products, though it can also occur by breathing or touching products contaminated with DDT. In the body, DDT converts into several breakdown products called metabolites, including dichlorodiphenyldichloroethene (DDE); both DDT and DDE persist in the body and the environment, stored in fatty tissues. In pregnant women, DDT and DDE can pass to the fetus, and both chemicals can be present in breast milk, resulting in exposure to nursing infants.
How DDT Affects People’s Health
Human health effects from DDT at low environmental doses are unknown. Following exposure to high doses, human symptoms can include vomiting, tremors or shakiness, and seizures. Laboratory animal studies show DDT exposure can affect the liver and reproduction. DDT is a possible human carcinogen according to U.S. and International authorities.
Levels of DDT and DDE in the U.S. Population
CDC scientists measured DDT and its metabolite DDE in the serum (a clear part of blood) of 1,956 participants aged 12 years and older who took part in CDC’s National Health and Nutrition Examination Survey (NHANES) during 2003–2004. (National Report on Human Exposure to Environmental Chemicals and Updated Tables). By measuring DDT and DDE in the serum, scientists can estimate the amounts of these chemicals entering people’s bodies.
- A small portion of the population had measurable DDT. Most of the population had detectable DDE. DDE stays in the body longer than DDT, and DDE is an indicator of past exposure.
- Blood serum levels of DDT and DDE in the U.S. population appear to be five to ten times lower than levels found in smaller studies from the 1970s.
Finding measurable amounts of DDT and DDE in serum does not imply that the levels of these chemicals cause an adverse health effect. Biomonitoring studies of serum DDT and DDE provide physicians and public health officials with reference values, which can be used to determine whether a person’s DDT and DDE levels are higher than those found in the general population. Biomonitoring data also help scientists plan and conduct research on exposure and health effects.
Consequences of DDT Exposure Could Last Generations
Scientists found health effects in grandchildren of women exposed to the pesticide
Hailed as a miracle in the 1950s, the potent bug killer DDT (dichloro-diphenyl-trichloroethane) promised freedom from malaria, typhus and other insect-borne diseases. Manufacturers promoted it as a “benefactor of all humanity” in advertisements that declared, “DDT Is Good for Me!” Americans sprayed more than 1.35 billion pounds of the insecticide—nearly 7.5 pounds per person—on crops, lawns and pets and in their homes before biologist Rachel Carson and others sounded the alarm about its impacts on humans and wildlife. The fledgling U.S. Environmental Protection Agency banned DDT in 1972.
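Those two figures are consistent only if the total is measured in pounds, which is why “pounds” appears above; a quick check, with the implied population as my own inference for illustration:

```python
# Sanity check: total DDT sprayed vs. the per-person figure.
total_pounds = 1.35e9   # total DDT sprayed, in pounds
per_person = 7.5        # claimed pounds per person

implied_population = total_pounds / per_person
print(f"{implied_population:,.0f} people")  # 180,000,000 -- a plausible
# mid-century US population, so the figures agree when the total is in pounds.
```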
Friends and family often ask Barbara Cohn, an epidemiologist at Oakland’s Public Health Institute, why she studies the effects of the long-banned pesticide. Her answer: DDT continues to haunt human bodies. In earlier studies, she found that the daughters of mothers exposed to the highest DDT levels while pregnant had elevated rates of breast cancer, hypertension and obesity.
Cohn’s newest study, on the exposed women’s grandchildren, documents the first evidence that DDT’s health effects can persist for at least three generations. The study linked grandmothers’ higher DDT exposure rates to granddaughters’ higher body mass index (BMI) and earlier first menstruation, both of which can signal future health issues.
“This study changes everything,” says Emory University reproductive epidemiologist Michele Marcus, who was not involved in the new research. “We don’t know if [other human-made, long-lasting] chemicals like PFAS will have multigenerational impacts—but this study makes it imperative that we look.” Only these long-term studies, Marcus says, can illuminate the full consequences of DDT and other biologically disruptive chemicals to help guide regulations.
In the late 1950s Jacob Yerushalmy, a biostatistician at the University of California, Berkeley, proposed an ambitious study to follow tens of thousands of pregnancies and measure how experiences during fetal development could affect health into adolescence and adulthood. The resulting Child Health and Development Study (CHDS) tracked more than 20,000 Bay Area pregnancies from 1959 to 1966. Yerushalmy’s group took blood samples throughout pregnancy, at delivery and from newborns while gathering detailed sociological, demographic and clinical data from mothers and their growing children.
Cohn took the helm of the CHDS in 1997 and began to use data from the children, then approaching middle age, to investigate potential environmental factors behind an increase in breast cancer. One possibility was exposure in the womb to a group of chemicals classified as endocrine disruptors—including DDT.
Human endocrine glands secrete hormones and other chemical messengers that regulate crucial functions, from growth and reproduction to hunger and body temperature. An endocrine-disrupting chemical (EDC) interferes with this finely tuned system. Many pharmaceuticals (such as the antibacterial triclosan and the antimiscarriage drug diethylstilbestrol) act as EDCs, as do industrial chemicals like bisphenol A and polychlorinated biphenyls, and insecticides like DDT. “These chemicals hack our molecular signals,” says Leonardo Trasande, director of the Center for the Investigation of Environmental Hazards at New York University, who was not involved in the study.
Thawing tens of thousands of CHDS samples from decades earlier, Cohn and her colleagues measured the DDT in each mother’s blood to determine the amount of fetal exposure. In a series of studies, they connected this level to the children’s midlife heart health and breast cancer rates.
Fetuses produce all their egg cells before birth, so Cohn suspected these children’s prenatal DDT exposure might also affect their own future children (the CHDS group’s grandchildren). With an average age of 26 this year, these grandchildren are young for breast cancer—but they might have other conditions known to increase risk of it striking later.
Using more than 200 mother-daughter-granddaughter triads, Cohn’s team found that the granddaughters of those in the top third of DDT exposure during pregnancy had 2.6 times the odds of developing an unhealthy BMI. They were also more than twice as likely to have started their periods before age 11. Both factors, Cohn says, are known to raise the risk of later developing breast cancer and cardiovascular disease. These results, published in Cancer Epidemiology, Biomarkers, and Prevention, mark the first human evidence that DDT’s health threats span three generations.
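For readers unfamiliar with the statistic, “2.6 times the odds” compares odds, p/(1-p), rather than raw probabilities. A minimal sketch with made-up rates (not the study’s actual data) shows how such a ratio is computed:

```python
# Illustration of what "2.6 times the odds" means.
# These probabilities are invented for illustration; they are NOT
# the CHDS study's actual rates.
p_exposed = 0.40     # hypothetical rate of unhealthy BMI, high-exposure group
p_unexposed = 0.20   # hypothetical rate in the comparison group

odds_exposed = p_exposed / (1 - p_exposed)         # 0.67
odds_unexposed = p_unexposed / (1 - p_unexposed)   # 0.25

print(f"odds ratio = {odds_exposed / odds_unexposed:.2f}")  # 2.67, close to 2.6
```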
Akilah Shahib, 30, whose grandmother was in the CHDS study and who participated in the current work, says the results provide a stark reminder that current health problems may stem from long-ago exposures. “DDT was a chemical in the environment that my grandparents had no control over,” she says. “And it wasn’t the only one.”
To Andrea Gore, a toxicologist at the University of Texas at Austin, the new results are nothing short of groundbreaking. “This is the first really robust study that shows these kinds of multigenerational outcomes,” says Gore, who was not involved in the study.
Laboratory studies, including one by Cohn in 2019, have shown that DDT and other EDCs can lead to effects across generations via epigenetic changes, which alter how genes turn on and off. Cohn is also investigating the multigenerational effects of other endocrine disruptors, including BPA and polyfluorinated compounds.
Such research also highlights the need for long-term testing to determine a chemical’s safety, N.Y.U.’s Trasande says. Gore agrees, arguing that regulators should require more rigorous testing for endocrine-disrupting effects; while scientists learn about the specific mechanisms by which EDCs influence health over multiple generations, she adds, they should routinely look for hallmarks of such influences in lab toxicology studies.
Videos from the late 40s show DDT being sprayed on children in a pool, to showcase how ‘safe’ it was for use in the home and around humans. Source: https://www.youtube.com/watch?v=kbcHszMCIJM&t=28s
As Trasande puts it: “This study reinforces the need to make sure that this doesn’t happen again.”
The first risk of DDT stems from its tendency to concentrate in biological systems, particularly in body fat. Once DDT enters the body, it is stored in fat tissue, where it can build up to toxic levels.
Over 25,000 barrels of toxic industrial waste were found off the Southern California coast, yet the amount of chemicals like DDT still leaking from these barrels remains unclear. Surfrider chats with a Scripps professor to learn the history of this waste and whether there’s a current risk to marine life and Southern Californians.
Back in 2011, UC Santa Barbara scientist David Valentine and his team of researchers used a deep sea robot (~3000 feet deep) to get the first photos of barrels of toxic waste littering the seafloor, yet the finding failed to stay in the headlines. In October 2020, Rosanna Xia and the LA Times published a story to help expose decades of dumping barrels full of toxic waste off of the Southern California coast, roughly halfway between Palos Verdes and Catalina. This resurgence of awareness and interest helped spark action, and in April 2021 a group of scientists shared results of an extensive mission focused on mapping the area. They counted more than 25,000 barrels that they believe may contain DDT-laced industrial waste.
For an update on the issue we reached out to Dr. Lihini Aluwihare, Professor of Marine Chemistry and Geochemistry at UC San Diego’s Scripps Institution of Oceanography, with some questions.
1- Many people have been shocked by the headlines and stories about DDT and other chemicals that were dumped off the Southern CA coast many years ago. Could you start with a brief synopsis of what was dumped, when it was dumped and what researchers have been able to document in the last few years?
Historical shipping logs show that industrial companies in Southern California used the basin as a dumping ground until 1972, when the Marine Protection, Research and Sanctuaries Act, also known as the Ocean Dumping Act, was enacted.
The document that many of us have been working with is the Chartrand 1985 report. The LA Times piece by Rosanna Xia is also a must-read in this context. In the Chartrand report, the California Salvage Company and the Pacific Ocean Disposal Company, in particular, are highlighted as being involved in dumping industrial waste.
Between 1947 and 1961, “acid sludge” from the Montrose Chemical company (“2000-3000 gallons per day of DDT acid sludge” containing ~0.5-1% DDT) and “caustic and acid waste” from oil refineries were dumped. The locations where this dumping occurred were not specified, but later, when the LA regional water quality control board began monitoring dumping activities, specific deepwater dumping sites were designated – first “dumpsite 1” (just northwest of Catalina Island in the Santa Monica Basin region), and later “dumpsite 2” (east of Catalina Island, 10 miles offshore of LA, in the San Pedro Basin). Dumpsite 2 is the one that was examined in 2011 and 2013 by Dave Valentine’s group from UCSB, whose video footage shows barrels on the seafloor. In addition to visually recording barrels (~60) on the seafloor, this study quantified DDT and its degradation products in sediments around the barrels. Barrel contents were not directly sampled, and it is still unknown what is actually inside the barrels. Nearly 15 years prior to that, I. Venkatesan, working with Alan Chartrand, visited both dumpsites (1 & 2) and detected DDT in the sediments at both locations. This latter study suggested that the amount of DDT directly dumped at deep sites was similar to the amount accumulating off the Palos Verdes shelf (runoff from wastewater treatment facilities off White’s Point), thereby doubling the local Southern California inventory of DDT in sediments. These estimates were supported by the Valentine study as well. Nearly 10 years after the Valentine study, “dumpsite 2” was surveyed by Eric Terrill’s group at Scripps Institution of Oceanography with sonar technology mounted on robots, and his group, using machine learning technology applied by Sophia Merrifield (also at Scripps), identified ~25,000 barrel-like targets in the data, spread over 36,000 acres.
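To put those dumping figures in perspective, here is a rough back-of-envelope calculation, sketched in Python. The daily volume, DDT fraction, and time span come straight from the numbers quoted above; the midpoints used and the assumed density of the DDT fraction are illustrative assumptions, not measured values.

```python
# Back-of-envelope estimate of total DDT dumped at sea, using the figures
# quoted above. Midpoints and density are illustrative assumptions only.

GALLONS_TO_LITERS = 3.785

sludge_gal_per_day = 2500    # midpoint of the quoted 2000-3000 gal/day
ddt_fraction = 0.0075        # midpoint of the quoted ~0.5-1% DDT
years = 14                   # 1947-1961
density_kg_per_liter = 1.5   # assumed density of the DDT fraction

ddt_gallons = sludge_gal_per_day * 365 * years * ddt_fraction
ddt_tonnes = ddt_gallons * GALLONS_TO_LITERS * density_kg_per_liter / 1000

print(f"~{ddt_gallons:,.0f} gallons of DDT")  # on the order of 100,000 gallons
print(f"~{ddt_tonnes:,.0f} tonnes of DDT")    # roughly 500-550 tonnes
```

Even under these rough assumptions, the total lands in the hundreds of tonnes, which helps explain how the deep dumpsites could plausibly double the local sediment inventory of DDT.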
2- Are any of these containers actively leaking toxic chemicals like DDT? How do you know?
I don’t know the exact answer to that. There are halos around some containers in the sediments, as indicated by the photos provided by UCSB, that could be interpreted to suggest that something did come out. The reports state that barrels were actually compromised before being dumped to ensure they would sink. Also, the paper by Valentine’s UCSB group referred to potential signs (on some of the barrels) that something could be leaking out, such as precipitation and bubbles on the outside, but this is not something that I have direct evidence for.
3- Are any of these chemicals dangerous for marine life?
The primary ecological concern is for resident marine mammals that consume high quantities of locally sourced fish containing these contaminants. As a result, the chemicals bioaccumulate in marine mammals. In addition, because of their high blubber content, these animals concentrate DDT and other fat-soluble chemicals, biomagnifying the contaminants. The work being done at The Marine Mammal Center (Sausalito, CA) suggests that the high body burden of DDTs makes sea lions more susceptible to cancer.
In terms of risk to human health, consuming considerable quantities of DDT, such as by eating contaminated fish, is the primary concern. Swimming in the water is not a concern.
Given that we have multiple sources of DDT to the local marine food web, we are not yet in a position to say whether these deep dumpsites are contributing to the current DDT body burden of local biota. However, what is known is that the DDT body burden in local marine mammals in our region is among the highest values reported in the world.
4- Are there any current or future threats to ocean recreation or human health in Southern CA and the Channel Islands from these dumped barrels?
There is little threat from recreating in the ocean – summer is here! There are fish consumption advisory data that can help people determine how much local fish is too much, and more could certainly be done to monitor this. Furthermore, DDT is not the only man-made chemical to consider when discussing seafood consumption; for example, our region also shows elevated levels of PCBs and many emerging contaminants (I direct you to the work of Eunha Hoh at San Diego State University, for example, as well as local offices of state agencies). In coastal locations, the DDT and PCB body burdens do show a decreasing trend, but that doesn’t guarantee that future perturbations won’t release some of these chemicals back into the water column, so we need to understand what those activities might be. Raising awareness of how these chemicals move around will help with that.
Additionally, natural toxins like domoic acid, produced by a very specific group of microscopic algae, are also very relevant in this context and can be quite devastating to ecosystems up and down the coast. However, there are many excellent efforts to monitor those blooms or “outbreaks”, including through SCCOOS (Southern California Coastal Ocean Observing Systems).
5- What are the impacts to seafood and marine life?
The toxicological impacts are outside my areas of expertise, but I think I answered some of this in question #3. The amount of DDT in relevant seafood is what is most important when considering consumption. There is the fantastic study by Gulland et al. on the potential impact of DDT on sea lion health. Additionally, endocrine disruption in local bottlenose dolphins may also be related to DDT content (Trejo, Hoh et al.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6301072/).
6- What types of cleanup or other mitigation efforts have been discussed?
The focus at Scripps Institution of Oceanography has really been on the following question: what do we need to know about this problem in order for others to manage and minimize the risk that these deep dumpsites pose to ecosystem and human health? First, we have to delineate the areal extent of the contamination at these deep dumpsites – that is the first step. Next, we want to determine whether DDT and other chemicals from the deep dumpsites specifically are entering the marine food web – as I noted before, DDT is present in coastal sediments as well, so delineating the major sources of DDT to the local food web is important from a management point of view. Plus, what else might be down there in those barrels? Another interesting question with regard to cleanup is whether natural remediation is going on – a paper by Dave Valentine’s group at UCSB addressed this for a sediment-associated bacterium. From a management perspective, how carefully do agencies have to manage future activities in the San Pedro Basin that could disturb or alter the sediments? Answering these questions is crucial for determining how to manage the risk going forward.
7- Is there a possibility that cleanup efforts could do more harm than good by disturbing or dispersing the chemicals?
That is why we are pressing for more research – to address exactly this issue. DDT, its degradation products, and other chemicals detected in these sediments are “sticky” and will adsorb onto solid surfaces like the sediments at the bottom of the ocean or detrital particles (the waste products of the food web) in the water. So we expect to find these chemicals accumulating in the sediment. Trying to “alter” the sediment content of DDT, especially in these deep waters, will not be trivial. However, we don’t know whether these sediments are being mobilized by natural processes that release DDT-containing particles into the water. This is another area of research that we are pursuing.
8- What are government agencies doing to address this issue?
We have seen engagement on this issue at all levels of local and state government, and even the Congressional Subcommittee on Water, Oceans and Wildlife of the House Natural Resources Committee recently held a hearing on this important topic. There is clear engagement from our local members of Congress and also Senator Feinstein. In addition, CalEPA appears to be very much engaged in determining the appropriate next steps. I am not sure what exact steps are being discussed right now, but what the research community really needs is funding, because the accessibility of this deep location requires the use of advanced technology.
9- Is there anything that the general public could do to help elevate this issue and find solutions?
It is important for the public to learn which recreational activities pose risks and which don’t – the public should continue to enjoy the ocean and recreate in it. They can also pay more attention to what they are eating and where it is coming from; there are resources that can provide this information. In terms of ecosystems, it is important for the public to recognize that our activities have lasting consequences for ecosystem health. For example, the incidence of cancer in sea lions is apparently quite rare, and so the study showing cancer in our local sea lions should be something we pay attention to in terms of the environmental impact of our activities – trying to mitigate those impacts on ecosystems going forward is important. I think this problem is also a classic example of how human alteration of the environment can have lasting effects even after the active sources of the disturbance (including pollutants) have been “switched off.” Hopefully the public can see the parallels to other sources of disturbance, such as adding CO2 to the atmosphere and the demand for plastics, to name a few.
This is an issue that needs to be studied and addressed so we can manage this area carefully and oversee activities that might remobilize the industrial waste. Furthermore, what we learn here could inform how we approach problems at other deep ocean locations. As such, supporting the continued production of scientific data regarding the risks posed by these dumpsites is something the public could do, along with paying attention to what commercial activities are planned in these regions going forward and ensuring that those plans include a risk assessment.
The book ‘Silent Spring’ by Rachel Carson was published over 50 years ago and revealed the hazards of DDT to human and wildlife health. Currently, the World Health Organization (WHO) and the Gates Foundation promote the use of DDT in developing countries in Africa for malaria control. The present-day potential hazards of DDT exposure now need to be considered in light of the transgenerational actions of DDT. The various transgenerational diseases promoted by DDT include obesity, kidney disease, and ovarian disease. The long-term health and economic effects on survivors and subsequent generations now need to be weighed against the number of lives saved from malaria. A more careful risk-benefit analysis of the use of DDT is needed, since other options exist with less toxic, shorter half-life pesticides. The primary objective of the following discussion is to incorporate the concept of transgenerational inheritance.
Images of death permeate the opening chapter of Rachel Carson’s Silent Spring: birds no longer sing, fish float on the surface of ponds, and flowers droop by the roadside. This grim portrait, first published 50 years ago, gripped readers, urging them to take a closer look at the ever-growing use of synthetic organic herbicides and pesticides that began toward the end of World War II.
When these chemicals first came on the market, they appeared almost miraculous. In 1939 Swiss chemist Paul Müller had shown that the compound DDT (dichlorodiphenyltrichloroethane) eradicated insect populations. DDT sprayed from airplanes eliminated the malaria- and dengue fever–carrying mosquitoes that sickened and even killed American soldiers in the Pacific theater. Wartime successes led to postwar applications, with chemical companies selling DDT to farmers to reduce crop loss to insects. Tropical nations used the compound to continue the fight against the mosquitoes that spread malaria. In the 1950s the chemical industry created new pesticides and herbicides, such as chlordane and heptachlor for killing insects and 2,4-D to control sagebrush growth on western U.S. roadsides.
But all this good news had a dark side, which Carson detailed in her 1962 book. At the time Carson was already well known for her best seller The Sea Around Us. She had worked for the U.S. Fish and Wildlife Service, first as a biologist and then as editor in chief of the organization’s publications, before turning to writing full time. Carson’s genius lay in pulling together already existing data from many areas and synthesizing it to create the first coherent account of the effects persistent chemicals had on the landscape and its inhabitants. She then turned these facts into a gripping narrative.
The book did not condemn all chemicals; Carson instead described herself as opposing the “reckless and irresponsible poisoning of the world that man shares with all other creatures.” She showed how DDT sprayed on alfalfa moved through alfalfa-fed hens, into the eggs, and finally into the egg-eating humans. She told readers how certain chemicals like dieldrin, used to kill agricultural pests, were stored in the body. The plants, animals, and people in her book formed an interconnected web—all affected by the introduction of these compounds.
Rachel Carson on Hawk Mountain, Pennsylvania, 1945. (Rachel Carson Council, Inc.)
Chemical manufacturers fought back. Velsicol Chemical Corporation, which produced chlordane and heptachlor, threatened Carson’s publisher with a lawsuit over her claims about its pesticides. Others pointed out Carson had failed to mention chemical successes: increased farm production and control over such insect-spread diseases as malaria. The Monsanto Company published an essay, “The Desolate Year,” the month after Silent Spring’s appearance to counter Carson’s apocalyptic vision. The essay showed that without pesticides and herbicides farmers would be unable to produce enough food for a growing population and that preventable diseases would continue to kill people. Many years later Robert A. Roland, president of the Chemical Manufacturers Association from 1978 to 1993, said that the chemical industry had made a mistake in not properly engaging with Carson and the environmental issues she highlighted.
The U.S. government responded to the book with an investigation into the ecological and health effects of these synthetic chemicals. President John F. Kennedy ordered his Science Advisory Committee to review pesticide and herbicide experiments. The committee published its findings a year later and suggested a link between DDT and liver damage, but its scope was limited. Regardless, Carson’s book changed how people saw the world around them and kick-started the modern environmental movement. Before Silent Spring’s publication regulation governing pesticides focused on their effectiveness; after publication, effects on the environment also had to be considered.
In the following decade the widespread use of DDT became the main focus of the environmental movement. In 1972 the U.S. Environmental Protection Agency banned the compound, persuaded by new environmental organizations like the Environmental Defense Fund, which linked DDT to the thinning of eggshells and the subsequent near disappearance of the osprey from Long Island, New York. The government banned chlordane completely in 1988 and now strictly limits the use of heptachlor to the control of fire ants.
Carson’s book did not argue for prohibition of all synthetic chemicals. Instead, it showed that new methods, like the release of sterilized insects combined with a judicious use of pesticides, could control pests, limit their growing resistance to the chemicals used against them, and reduce the harmful effects on humans and the environment. Silent Spring was more than prophecy: it changed how governments, industry, and agriculture respond to ills that will always be with us.
The book ‘Silent Spring’ triggered an environmental movement; as such, the toxic effects of chemical agriculture have been known basically from the very beginning. We have suffered massive environmental damage, disease and pest resistance, and human health issues.
Silent Spring is a 1962 environmental science book by Rachel Carson. The book documented the detrimental effects on the environment—particularly on birds—of the indiscriminate use of pesticides. Carson accused the chemical industry of spreading disinformation and public officials of accepting industry claims unquestioningly.
In the late 1950s, Carson turned her attention to conservation, especially environmental problems that she believed were caused by synthetic pesticides. The result was Silent Spring (1962), which brought environmental concerns to the American public. Silent Spring was met with fierce opposition by chemical companies, but it spurred a reversal in national pesticide policy, led to a nationwide ban on DDT for agricultural uses, and inspired an environmental movement that led to the creation of the U.S. Environmental Protection Agency.
In 1996, a follow-up book, Beyond Silent Spring, co-written by H.F. Van Emden and David Peakall, was published. In 2006, Silent Spring was named one of the 25 greatest science books of all time by the editors of Discover Magazine.
Research and Writing
In the mid-1940s, Carson became concerned about the use of synthetic pesticides, many of which had been developed through the military funding of science after World War II. The United States Department of Agriculture’s 1957 fire ant eradication program, which involved aerial spraying of DDT and other pesticides mixed with fuel oil and included the spraying of private land, prompted Carson to devote her research, and her next book, to pesticides and environmental poisons. Landowners in Long Island filed a suit to have the spraying stopped, and many in affected regions followed the case closely. Though the suit was lost, the Supreme Court granted petitioners the right to gain injunctions against potential environmental damage in the future, laying the basis for later environmental actions.
The impetus for Silent Spring was a letter written in January 1958 by Carson’s friend, Olga Owens Huckins, to The Boston Herald, describing the death of birds around her property resulting from the aerial spraying of DDT to kill mosquitoes, a copy of which Huckins sent to Carson. Carson later wrote that this letter prompted her to study the environmental problems caused by chemical pesticides.
The Washington, D.C. chapter of the Audubon Society actively opposed chemical spraying programs and recruited Carson to help publicize the U.S. government’s spraying practices and related research. Carson began the four-year project of Silent Spring by gathering examples of environmental damage attributed to DDT. She tried to enlist essayist E. B. White and a number of journalists and scientists to her cause. By 1958, Carson had arranged a book deal, with plans to co-write with Newsweek science journalist Edwin Diamond. However, when The New Yorker commissioned a long and well-paid article on the topic from Carson, she began considering writing more than the introduction and conclusion as planned; soon it became a solo project. Diamond would later write one of the harshest critiques of Silent Spring.
As her research progressed, Carson found a sizable community of scientists who were documenting the physiological and environmental effects of pesticides. She took advantage of her personal connections with many government scientists, who supplied her with confidential information on the subject. From reading the scientific literature and interviewing scientists, Carson found two scientific camps: those who dismissed the possible danger of pesticide spraying barring conclusive proof and those who were open to the possibility of harm and were willing to consider alternative methods, such as biological pest control.
By 1959, the USDA’s Agricultural Research Service responded to the criticism by Carson and others with a public service film, Fire Ants on Trial; Carson called it “flagrant propaganda” that ignored the dangers that spraying pesticides posed to humans and wildlife. That spring, Carson wrote a letter, published in The Washington Post, that attributed the recent decline in bird populations—in her words, the “silencing of birds”—to pesticide overuse. The same year, the 1957, 1958, and 1959 crops of U.S. cranberries were found to contain high levels of the herbicide aminotriazole and the sale of all cranberry products was halted. Carson attended the ensuing FDA hearings on revising pesticide regulations; she was discouraged by the aggressive tactics of the chemical industry representatives, which included expert testimony that was firmly contradicted by the bulk of the scientific literature she had been studying. She also wondered about the possible “financial inducements behind certain pesticide programs”.
Research at the Library of Medicine of the National Institutes of Health brought Carson into contact with medical researchers investigating the gamut of cancer-causing chemicals. Of particular significance was the work of National Cancer Institute researcher and founding director of the environmental cancer section Wilhelm Hueper, who classified many pesticides as carcinogens. Carson and her research assistant Jeanne Davis, with the help of NIH librarian Dorothy Algire, found evidence to support the pesticide-cancer connection; to Carson the evidence for the toxicity of a wide array of synthetic pesticides was clear-cut, though such conclusions were very controversial beyond the small community of scientists studying pesticide carcinogenesis.
By 1960, Carson had sufficient research material and the writing was progressing rapidly. She had investigated hundreds of individual incidents of pesticide exposure and the resulting human sickness and ecological damage. In January 1960, she suffered an illness which kept her bedridden for weeks, delaying the book. As she was nearing full recovery in March, she discovered cysts in her left breast, requiring a mastectomy. By December that year, Carson discovered that she had breast cancer, which had metastasized. Her research was also delayed by revision work for a new edition of The Sea Around Us, and by a collaborative photo essay with Erich Hartmann. Most of the research and writing was done by the fall of 1960, except for a discussion of recent research on biological controls and investigations of some new pesticides. However, further health troubles delayed the final revisions in 1961 and early 1962.
Marijuana is the most popular federally illegal drug in the United States. In 2019, approximately 48.2 million people, or 18 percent of Americans, used it at least once.7
In some states, the legalization of marijuana has made the drug more socially acceptable, which can downplay potential unwanted consequences such as cannabis dependence.
The following drugs are commonly prescribed to treat pain: oxycodone, hydrocodone, morphine, codeine, and fentanyl.
Painkillers’ prescription status doesn’t mean they aren’t addictive. Painkiller addiction can develop even from moderate use.
Most people who become addicted to prescription painkillers don’t notice they are dependent until they try to stop taking them.
Painkillers can be obtained illegally without a prescription, which may provide a pathway for unwanted addiction.
Overdose deaths from cocaine are rising. More than 16,000 Americans died from a cocaine overdose in 2019.8
Crack cocaine is cheaper and more intense than regular cocaine and poses a great risk for substance use disorder (SUD).
In 2016, about 948,000 Americans reported using heroin in the past year. This number has been on the rise since 2007.9
Heroin’s severe withdrawal symptoms make quitting heroin use challenging.
Treating heroin addiction usually requires a combination of therapy and medications to manage withdrawal symptoms and cravings.
People who inject drugs, such as heroin, are at risk of contracting or spreading viral illnesses such as hepatitis or HIV from sharing needles.
Approximately 12.5 percent of Americans use benzodiazepines. This is equal to around 30.5 million people.10
About 2 percent of Americans misuse benzodiazepines. But only 0.2 percent met the criteria for a benzodiazepine use disorder.
Common benzodiazepines include alprazolam (Xanax), diazepam (Valium), lorazepam (Ativan), and clonazepam (Klonopin).
These drugs are mood-regulating. They are used to manage conditions like stress and anxiety.
People who develop an addiction to these drugs often aren’t aware they are dependent until they can’t function normally without them.
DDT is a synthetic insecticide belonging to a class of chemicals called organochlorides. Also known as dichloro-diphenyl-trichloroethane, it is one of the most effective yet controversial synthetic insecticides ever developed.1 While incredibly effective at controlling mosquitoes, it also has devastating environmental impacts.2 Today, DDT is banned in much of the world, but it is still used to control malaria in some areas where the benefits might outweigh the risks.
What Is DDT and Why Was It Banned?
DDT was first synthesized in 1874; however, it wasn’t until 1939 that scientist Paul Müller discovered its effectiveness as an insecticide. Müller was awarded the Nobel Prize in 1948 for his discovery, and DDT use became fairly widespread.
DDT was initially used by the military during World War II to control malaria, typhus, body lice, and bubonic plague. It was sprayed on the interior walls of houses and even carried in small cans by soldiers for personal insect protection. DDT aerosol bombs became an easy way to control disease in the field.
World War II propaganda poster featuring a soldier applying DDT. (John Parrot)
After the war, DDT use continued to soar. In 1945, DDT was released for commercial sale and became widely used for insect control in crop and livestock production, institutions, homes, and gardens. In the early 1950s, due to its success in decreasing mosquito populations, the World Health Organization launched the Global Malaria Eradication Program.1
DDT was so widely used because it was effective, relatively inexpensive to manufacture, and lasted a long time in the environment.3 An estimated 5,000 metric tons of DDT were used for disease vector control in 2005, although current levels of DDT production and storage are often difficult to track.4
While initially DDT was an incredibly effective insecticide, its widespread use quickly led to the development of resistance by many insect pest species.2 Since the introduction of DDT for mosquito control in 1946, DDT resistance at various levels has been reported from more than 50 species of anopheline mosquitoes, including many that spread malaria.4 After decades of use, evidence of the pesticide’s declining benefits and suspected environmental and toxicological effects were becoming causes for concern.
Risk to Humans
Human exposure to DDT occurs primarily through inhalation after spraying or through ingestion from food sources. Once in the body, DDT collects primarily in fat tissue and remains there for quite some time.5 According to a study on DDT persistence, it would take between 10 and 20 years for DDT to disappear from an individual if exposure ceased entirely, but its primary metabolite, DDE, could persist throughout the individual’s lifespan.3
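The "10 to 20 years to disappear" figure is the kind of estimate that falls out of first-order (exponential) elimination, the standard model for a chemical cleared at a rate proportional to its remaining body burden. The minimal Python sketch below assumes a hypothetical 2-year biological half-life (not a value from the study) simply to show how a clearance time in that range arises.

```python
import math

# First-order elimination: the body burden halves every half-life.
# The 2-year half-life is a hypothetical value, chosen only so that
# clearance takes on the order of the 10-20 years quoted above.

half_life_years = 2.0   # assumed biological half-life of DDT
threshold = 0.01        # treat <1% of the initial burden as "gone"

def remaining_fraction(t_years: float) -> float:
    """Fraction of the initial body burden left after t_years."""
    return 0.5 ** (t_years / half_life_years)

print(f"after 10 years: {remaining_fraction(10):.1%} remains")  # ~3.1%

years_to_clear = half_life_years * math.log2(1 / threshold)
print(f"burden falls below 1% after ~{years_to_clear:.1f} years")
# -> ~13.3 years, inside the 10-20 year range quoted above
```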
Being at the top of the food chain, humans ingest DDT from food crops that were sprayed with it in the field. In addition, DDT accumulates in the fat of fish and mammals that were also exposed to DDT in the environment. That DDT is then passed up the food chain.
This long-term bioaccumulation, as it is called, means that over time, levels of DDT are highest in humans and larger predatory animals, especially meat-eating birds like eagles, hawks, condors, etc.
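Because the chemical concentrates at each step up the food chain, the effect compounds multiplicatively, which is why top predators carry the highest burdens. The toy model below uses entirely hypothetical concentrations and a made-up 10x magnification factor per trophic level, purely to illustrate the arithmetic.

```python
# Minimal sketch of biomagnification: a fat-soluble chemical's
# concentration is multiplied at each step up the food chain.
# All numbers here are hypothetical, for illustration only.

water_ppm = 0.00005                      # trace level in water
food_chain = ["plankton", "small fish", "large fish", "fish-eating bird"]
magnification_per_level = 10             # assumed 10x per trophic step

concentration = water_ppm
for organism in food_chain:
    concentration *= magnification_per_level
    print(f"{organism:>16}: {concentration:.4f} ppm")

# Four 10x steps turn a 0.00005 ppm trace into 0.5 ppm at the top,
# which is why birds of prey ended up with the highest DDT burdens.
```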
July 1945. A group of men from Todd Shipyards Corporation run their first public test of an insecticidal fogging machine at Jones Beach in New York. As part of the testing, a 4-mile area was blanketed with the DDT fog.
The EPA lists DDT as a class B carcinogen; this classification comes mainly as a result of animal studies as opposed to human studies. According to the Environmental Protection Agency, class B carcinogens are those that show some evidence of causing cancer in humans, but at present it is far from conclusive.7
There is currently no evidence in humans that DDT causes cancer or reproductive problems; however, workers exposed to large concentrations during application have reported a variety of neurological effects.8 DDT exposure side effects such as vomiting, tremors or shakiness, and seizures have been reported.5
Environmental Impact of DDT
The persistence of DDT in the environment, one of its most useful insecticidal properties, was also one of its most concerning with regard to its environmental impact.
Scientists began voicing concerns about the environmental effects of DDT as early as the 1940s; however, it wasn’t until Rachel Carson wrote the book “Silent Spring” in 1962 that widespread public concern began to grow.
In her book, Carson detailed how a single drop of DDT applied to crops lingered for weeks and months, even after a rainfall. And as an insecticide, it was incredibly efficient, killing not only mosquitoes but a host of other insects as well. Considered a general insecticide, DDT kills everything from beetles and lice to fleas and houseflies. For insect-eating birds, this poses a significant problem.1 “Silent Spring” detailed the reduction in some songbird populations as a possible result of widespread insecticide use.
In addition, long-term buildup of DDT in meat-eating birds like the bald eagle resulted in reproductive complications as well. High concentrations of DDT in these birds caused thinning of their eggshells and breeding failure. As a direct result of eggshell thinning, these eggs were easily broken, causing a significant population decline.9 The work Carson did in highlighting the dangers of DDT is often called the beginning of the modern environmental movement.
As public concern grew, numerous environmental organizations joined the fight. In 1967, the Environmental Defense Fund, the National Audubon Society, the National Wildlife Federation, the Izaak Walton League, and other environmental groups began working to restrict the use of DDT through legal action at both the local and federal levels. Due to the initiation of numerous court proceedings regarding the use of DDT, on October 21, 1972, the Federal Environmental Pesticide Control Act was enacted.10
As a result of growing environmental concerns, numerous countries around the world came together as part of the United Nations Environment Programme to restrict the usage of a broad selection of persistent organic pollutants (POPs), a group that includes DDT. This treaty is known as the Stockholm Convention on POPs, which allows the use of DDT only for malaria control.11
Many people mistakenly assume that DDT is no longer in use. However, the Stockholm Convention on POPs did not ban its use entirely.
Currently, numerous countries around the globe, from Africa to China, either use DDT to fight malaria or have reserved the right to do so in the future.
The use of DDT continues to be a controversial topic even today. Malaria is a significant risk to human health in many areas of the world. While some areas have had good results controlling mosquito populations with other insecticides, others have been unsuccessful.
DDT and Malaria
Malaria is a serious and sometimes fatal disease caused by parasite-infected mosquitoes when they feed on humans. According to the Centers for Disease Control, in 2020 an estimated 241 million cases of malaria occurred worldwide and 627,000 people died, mostly children in the African Region.12
While malaria is found in many countries, it is most commonly diagnosed in sub-Saharan Africa and South Asia.12
Many countries where malaria is common have switched from DDT to other insecticides; however, not all of these attempts have been successful. In areas where malaria is undeterred by other insecticides, DDT may be the only way to control mosquito populations and reduce fatalities from malarial disease.
Cost, ease of use, species of mosquito, and chemical resistance all play a part in a country’s decision on which insecticide to choose; however, the final factor is whether or not the chosen product works to reduce disease.
One concern regarding the use of DDT in certain areas of the world is that no country exists in isolation. When sprayed outdoors, DDT does not stay in a localized area. Traces of DDT have been recovered from dust known to have drifted over 600 miles and in water melted from Antarctic snow. From the soil your food grows in, to the rain falling in your backyard, DDT is still detectable today in microscopic amounts.6
At the tail end of World War II, Irma Materi left Seattle for Korea to join her husband, Joe, an army colonel. The couple and their new baby moved into a white stucco house with a red tile roof—and scores of nooks and crannies for insects to hide in. Fortunately, Materi had packed just the thing to address the problem: a grenade-shaped canister containing the new insecticide DDT, which she sprayed on high shelves, in dark corners, and under furniture and cabinets.
A few days later the Materis received a visit from the army’s DDT detail: a lieutenant and a dozen men wearing white jumpsuits with large spray packs strapped to their backs. As Materi scrambled to carry the family’s clothes, linens, utensils, and food to safety, the team doused the home with a solution of kerosene and DDT. Materi later wrote about the experience.
The army detail’s enthusiastic use of DDT is a familiar part of the pesticide’s postwar story. So too are the stock images from the late 1940s and 1950s that show American housewives drenching their kitchens with DDT and children playing in the chemical fog emitted by municipal spray trucks. Newspaper articles and advertisements called DDT “magic” and a “miracle”—which is likely why Materi took DDT along on her transpacific journey.
But articles and ads also cautioned that DDT was a substance to be handled with care—which is why there were limits to how much DDT Materi would tolerate in her home and why some Americans, such as Georgia farmer Dorothy Colson, wouldn’t tolerate DDT at all. Colson spent the late 1940s trying to launch a movement against DDT, convinced it was making Americans sick and killing off chicks and bees. To her it made no difference that the pesticide had—as the 1948 Nobel Prize committee put it—saved the “life and health of hundreds of thousands” from such insect-borne diseases as typhus, malaria, yellow fever, and plague. Where such diseases didn’t threaten people, Colson argued, DDT wasn’t worth the risk.
Materi’s anger at the overuse of DDT and Colson’s outright rejection of the pesticide don’t typically appear in the story of the now-infamous chemical. From history books to the recent news reports on Zika virus, accounts of DDT remind us that postwar Americans were so enamored with the pesticide’s potential to kill disease-carrying and crop-destroying pests that they quickly and enthusiastically embraced it. Nary a question about its toxicity or long-term risks was raised, we are led to believe, until Rachel Carson outlined them in her 1962 book, Silent Spring. DDT’s history is frequently invoked not only because the powerful pesticide was considered one of the most important technologies to emerge from the war but because we still struggle to control deadly and debilitating insect-borne diseases—Zika being the latest case in point.
We simplify the pesticide’s story because that stripped-down version of DDT’s history buttresses our understanding of the past. DDT’s powerful ability to control disease made the pesticide a hero of the war, and its development by American scientists still stands as proof that the United States earned its superpower status in large part through its scientific and technological prowess. The public’s acceptance of the chemical captures American postwar faith in scientific expertise. And its vilification by environmentalists serves as a powerful and lasting illustration of the baby boomer generation’s antiauthoritarian turn. Here, in short, is one chemical whose story illustrates some of the most profound social and cultural shifts in 20th-century U.S. history.
But what happens if we tell DDT’s story differently, by leaving out the Nobel committee, for example, and instead tuning into what Materi, Colson, and like-minded Americans were saying during the pesticide’s heyday? This side of the story reveals a public more circumspect about DDT than many of the experts and authorities promoting its use. This version reveals a citizenry accustomed to thinking of pesticides as life-threatening poisons, worried about this new insecticide’s toxicity, and uncertain about how to interpret assurances of its safety. This story shows that many Americans needed to be convinced that DDT was a technology worth adapting to peacetime use. And this story calls into question the claim that the nation wholeheartedly accepted DDT. Government agencies (some more than others) did turn to it with increasing frequency, and so did our industrializing agricultural industry. The American public bought into DDT, too—but more unevenly than we’ve been led to believe.
The American public first heard about DDT in early 1944, when newspapers across the country reported that typhus, “the dreaded plague that has followed in the wake of every great war in history,” was no longer a threat to American troops and their allies thanks to the army’s new “louse-killing” powder. In an experiment in Naples, Italy, American soldiers dusted more than a million Italians with DDT, killing the body lice that spread typhus and saving the city from a devastating epidemic. It was a dramatic debut.
DDT quickly began to work its magic on the home front, as well. In the seasons that followed, newspapers reported that in test applications across the United States the pesticide was killing malaria-carrying mosquitoes throughout the South and preserving Arizona vineyards, West Virginia orchards, Oregon potato fields, Illinois cornfields, and Iowa dairies—and even a historic Massachusetts stagecoach with moth-infested upholstery. A peacetime vision for DDT bloomed: here was a wartime discovery that would prevent human disease and protect victory gardens, commercial crops, and livestock from infestations as it turned schools, restaurants, hotels, and homes into more comfortable, pest-free places for people and their pets.
In October 1945 National Geographic ran a feature on the “world of tomorrow,” in which transatlantic rockets would speed mail delivery, stores would sell frozen foods from exotic lands, clothes would be coated in waterproof plastic, and electronic “tubes” and “eyes” would do everything from stacking laundry to catching burglars. Health and medicine would be vastly improved, too, thanks to sterilizing lamps, penicillin, and, of course, DDT. “But scientists are treading with caution in their use of DDT, because it kills many beneficial insects as well,” the authors added. In an accompanying photo—an image that’s now iconic—a truck-mounted fog generator coated a New York beach in DDT as young children played nearby. The pesticide had halted a typhus epidemic in Naples, the caption read, but it “also has a drawback—it kills many beneficial and harmless insects, but it does not kill all insect pests.” Crops, flowers, and trees dependent on pollinators could die off, as could birds and fish.
In wartime DDT had saved lives, and it had done so by inflicting easily accepted collateral damage. In peacetime, however, DDT’s negative effects on beneficial insects, birds, and fish warranted renewed consideration. National Geographic merely alluded to this; others were more direct. When the War Production Board first released DDT for sale to the public, it cautioned against “use of it to upset the balance of nature” and added that if applied to crops, DDT would leave residues that might also cause harm to humans.
What kind of harm? The problem was that no one really knew. Testing at the National Institutes of Health (NIH) and at the Food and Drug Administration (FDA) had shown that in lab animals DDT could cause tremors, liver damage, and death. Of the variety of animals tested in 1943 and 1944, monkeys seemed most resistant to DDT’s effects, mice the least. DDT suspended in oil proved more toxic than DDT dust, and the liquids DDT was dissolved in (like kerosene) often seemed more toxic than DDT itself. What was worrisome, according to FDA pharmacologist Herbert O. Calvery, was that the amount of DDT it took to produce symptoms of toxicity had no clear correlation across species; in some species it took very little, while in others it took a lot. The problem was complicated even further by the fact that when small animals ate small amounts of DDT over time, they developed poisoning symptoms normally associated with a single, large dose. Calvery concluded that although it was extremely difficult to say just how much DDT was safe for animals or humans to ingest, the safe “chronic”—or ongoing—level of DDT exposure “would be very low indeed.”
Calvery’s concerns appeared at the very end of a long, “restricted” report on insecticides issued by the Office of Scientific Research and Development in 1944. A War Department bulletin released the same month warned against spraying DDT on cattle, fowl, and fish and on waters that might be used for human consumption. It also cautioned soldiers against getting DDT-infused oil on their skin or DDT dust in their lungs, and strongly urged them not to let the pesticide “mingle” with kitchen supplies. At the same time, the insecticide in every recruit’s aerosol bomb was swapped out for DDT, and soldiers were instructed to spray or dust their mattresses and mess halls, latrines and barracks, dugouts, infirmaries, and even their uniforms. The warnings and cautions attached to army memos about DDT did yield some measures of self-protection: soldiers charged with DDT detail were given the protective gear Materi later saw on the team that entered her home. DDT was a poison, but it was safe enough for war. Any person harmed by DDT would be an accepted casualty of combat.
If DDT was harmful to humans, the methods by which it worked its harm were no clearer in peace than in combat. If anything, as time passed, DDT’s safety seemed to be unprecedented. By the fall of 1945 millions of people had come in direct contact with DDT—in Naples, North Africa, the Pacific, even throughout the southeastern United States where the chemical was sprayed in homes in an attempt to rout the last vestiges of malaria. No one displayed ill effects. The few human DDT poisonings seemed to be isolated cases associated with massive ingestion, like that among a group of starving Formosan prisoners of war who mistook DDT for flour and used it to bake bread. Not one died, though those who ate the most bread suffered lasting neurological damage.
But such cases caused little alarm. DDT was released for public sale in late 1945, at a time when insecticides were commonly known as “poisons” (or by professionals as “economic poisons” for their ability to preserve agricultural profits). Insecticides introduced in the latter half of the 19th century for commercial agriculture often contained copper, lead, and arsenic, and by the first half of the 20th century it was well known that insecticide residues on fruits and vegetables could sicken and even kill hapless consumers. This reputation was regularly reinforced by publicized cases of poisoning: Illinois women sickened by sprayed asparagus; the Montana girl poisoned by sprayed fruit; poisonings in Los Angeles traced back to excessive residues of arsenic on cabbage, pears, spinach, broccoli, and celery. There were also the tragic accidents associated with the increased presence of pest poisons in everyday life, such as the death of 47 patients at an Oregon hospital where roach powder was confused for powdered milk.
Instead of distancing themselves from poison sprays, however, by World War II more and more American consumers were bringing them home from the corner store. As Americans planted victory gardens to grow their own food, they amassed household-sized collections of agricultural poisons, including lead arsenate, calcium arsenate, nicotine sulfate, bichloride of mercury, and Bordeaux powder, a mixture of copper sulfate and lime. “Every gardener with more than a month’s experience,” noted a magazine writer in the spring of 1945, now has “a combination of powders and solutions as lethal as an arsenal.”
Insecticides, by definition, were poisons, and consumers were used to thinking of them as such despite their growing ubiquity. DDT thus posed an unparalleled paradox. It seemed to avoid so many of the downsides of the old insecticides: insects didn’t have to eat it to die but merely had to come into contact with it; it kept on killing for months after it was applied; and it killed an extraordinary range of insects at very low doses, all without causing any detectable harm to people. But for every feature that set it apart from the earlier insecticides, it was still a substance meant to kill. So how were consumers to receive reassurances of DDT’s safety in the government brochures, news articles, and ads that sang its praises?
One answer was to reject such claims, as a number of journalists and lawmakers did in DDT’s first year on the consumer market. When the pesticide was first released for sale, state officials in Missouri issued a formal warning against it, citing unknown hazards to plants, animals, and humans. Minnesota banned its sale, New Jersey restricted it, and California and New York issued decrees requiring that DDT-containing products bear the skull and crossbones indicating a dangerous poison. This last approach worried officials at the FDA and NIH. If people learned through experience that DDT could be handled with less caution than such bona-fide poisons as strychnine and bichloride of mercury—which it certainly could—they would lose their respect for the skull and crossbones as a signifier of danger.
As states struggled to regulate DDT, journalists struggled to reconcile warnings and promises. “Make no mistake about it. DDT in sufficient quantity is a poison,” announced one homemaking magazine. Sure, it slaughtered cockroaches, but “DDT presumably could send you on a death jag too,” reported another. “DDT: Handle with Care,” announced yet another publication, which went on to tell readers that DDT in substantial amounts would “attack nerve centers and the liver” and that small amounts consumed over time might “build up in the body to a fatal dose.” After all, noted one writer, that’s exactly what consuming lead and arsenic could do. DDT, “that storm center of pros and cons,” needed to be treated “as respectfully as arsenate of lead,” wrote another. DDT’s purported safety was one of the most exciting things about it, but it was also one of the hardest to believe.
So when Dorothy Colson saw planes spraying DDT over land adjacent to her family farm, it was easy for her to connect the pesticide to the problems that suddenly wouldn’t let up. In the years just after the war Colson launched a dogged investigation into DDT, writing to state agencies, manufacturers, and organizations far and wide. The literature she amassed on the pesticide indicated that it might be harmful to humans but offered no conclusive proof that it was. And the more experts she questioned, the more she was told that DDT had above all saved countless lives around the globe, all while never harming a person.
Expert worries about DDT’s ecological effects, however, were no secret. Newspapers far and wide reported that the new chemical was a threat to nature. (Older agricultural chemicals, such as lead and arsenic, typically got press space only when they poisoned people.) DDT killed off beneficial bugs and had the potential to “eliminate ducks and geese,” “paralyze” sheep, “burn” plants, and spark population explosions of some pests by wiping out their natural predators. In Colson’s home state, Atlanta Constitution farm editor and radio-show host Channing Cope wrote of his experience testing DDT on his property.
“DDT will kill the bees and that means that it will kill the clover, which means, too, that it will kill off our livestock,” he warned. “It will destroy the fruit crops which are dependent on bees for pollenization! It will kill most of the flowers for the same reason and will wipe out many of our vegetables.” He concluded, ominously, that DDT “has the power to ruin us.”
But Cope had other observations to share as well. The pesticide had eliminated the bugs pestering his mules, dairy cows, Scottish terrier, cat, and pig; and it seemed to be keeping the bugs from coming in through cracks and crevices in his windows and walls. Although its downside was undeniable, he wrote that DDT was also a “great tool for our betterment.”
Cope’s ambivalence captured that of the nation as a whole. Despite their trepidation Americans were enamored with the ways in which DDT promised to improve life on the farm and at home. Unmolested by insects, dairy cattle produced more milk and steers yielded more meat. Cockroaches disappeared from cupboards, ants from the sugar, bedbugs from mattresses, and moths from rugs. Even the flies then suspected of carrying polio seemed to take the disease with them as they disappeared. DDT sales continued to climb—even as the Colsons and the Copes struggled to make sense of the chemical’s harms. And so the nation moved forward, still ambivalent: DDT production increased tenfold to more than 100 million pounds by the beginning of the 1950s (the vast majority of it used in agriculture).
But the fears didn’t fade away. In the spring of 1949 headlines across the country carried the news that DDT had found its way into the nation’s dairy supply and that the “slow, insidious poison” was building up in human bodies. The following year, and for the rest of the 1950s, DDT became a focus of congressional hearings about the safety of the food supply. FDA scientist Arnold J. Lehman testified that small amounts of DDT were being stored in human fat and accumulating over time and that, unlike with the older poisons, no one knew what the consequences would be. Physician Morton Biskind shared his concern that DDT was behind a new epidemic, so-called virus X (an epidemic later attributed to chlorinated naphthalene, a chemical in farm machinery lubricants). Pesticide-eschewing farmers, such as Louis Bromfield, testified they simply could not meet the demand for spray-free crops from Heinz, Campbell’s, A&P, and other companies—all of which were themselves trying to meet the demands of consumers worried about pesticides generally, and specifically the ubiquitous and well-publicized DDT.
By the time Rachel Carson detailed DDT’s harm to falcons, salmon, eagles, and other forms of wildlife in Silent Spring, a good number of Americans had been demanding more information about the insecticide’s ill effects for the better part of two decades. And yet to this day that’s not how we talk about DDT’s past. Instead, we tell the story of a chemical whose powers were so awe-inspiring that no one gave any thought to its downsides—at least not until they were brought to light by one renegade scientist. It’s a narrative that gave Americans a hero for the latter part of the 20th century, a female scientist and writer smart enough and brave enough to take on the establishment and win. It’s a story about the power of social movements to remake society for the better. And it’s a story of a nation reformed, able to set aside hubris for reason.
As a society we use narratives to organize our shared past into a beginning, middle, and end. The stories we tell over and over again, like that of DDT, explain how we arrived at the present, and they point to a hoped-for future. DDT was banned in the United States in 1972, a development largely credited to Carson and the environmental movement she helped inspire. But in recent reports on Zika—and in less-recent debates about malaria in developing nations—a new ending to DDT’s story took shape. In this version of events there is a responsible way to use the pesticide and a potential need for it when it comes to controlling the most intractable insect-borne diseases. In this version our considered deployment of DDT would never repeat the mistakes of the past, especially the overuse of the pesticide in agriculture. In this new ending today’s experts are more enlightened than their historical counterparts; their expertise stems in part from learning from past mistakes, and with this wisdom they determine the appropriate limits in using powerful technologies.
Maybe so. I can’t predict the future, but I can say that these competing DDT narratives neatly illustrate a problem with the past: when we as a collective remember our shared history, we pick and choose from what happened in order to build our great narratives of nation and identity. In so doing we throw out the pieces that don’t fit and come to believe there is only one true past. If this manner of storytelling is a human inevitability, then perhaps we should learn to recognize the ways selective memory shapes so many of the narratives that tell us who we think we are.
Robert Nobel buys a refinery and a plot of land in Baku.
Robert Nobel begins producing paraffin in Baku.
Rockefeller’s Standard Oil gets competition from Nobel.
Ludvig Nobel sends his works manager, Alexander Bary, to the USA to study American oil management.
Ludvig Nobel and his son, Emanuel, visit Robert in Baku.
Knowledge of oil management is obtained from the USA.
Great Backerganj Cyclone (Bangladesh, 1876) Also known as the Bengal Cyclone of 1876, the cyclone occurred Oct. 31, 1876, in Bangladesh, leading to the deaths of an estimated 200,000 people. Forming over the Bay of Bengal, the cyclone made landfall at the Meghna River Estuary.
A cyclone in the area we now know as Bangladesh destroyed the city of Chittagong. At least 200,000 people in Chittagong and its surrounding area died as a result of the cyclone.
On October 31, 1876, the community of Chittagong, in a part of India that is now in the nation of Bangladesh, experienced a powerful cyclone that swept inland up the River Meghna, part of the Ganges River's delta. As the surge of water moved upstream into the shallower and narrower stretches of the river it rose in height until it became a monstrous wall of ocean water, thirty to forty feet high.

Bangladesh, on the eastern part of the Bay of Bengal, experiences cyclones twice a year, in October and in May. These two months represent the turning points of the monsoon winds: in May they begin to move onshore, and in October they move southward from the cold air mass in the north to dominate the atmosphere of India for almost six months. It seems that these turning points, because they represent for a time a mixing of the two contrasting wind systems, trigger the cyclones.

The impact of this cyclone in 1876 was devastating in every way. A hundred thousand persons drowned and another hundred thousand perished from diseases or famine.
Bangladesh seems to have received more of the types of cyclones that result in high death rates than any other country in South or Southeast Asia. Support for this claim comes from a list of cyclones in South and Southeast Asian countries that each killed more than 5,000 people: Bangladesh accounts for more than half of the entries. One physical factor in the environment of Bangladesh may be a contributing cause. The country's overall low elevation makes it easy for relatively small storms to transform its coastal area into a vast sea. With regard to the future, scientists have debated the implications of global warming for cyclones in the Bay of Bengal. The only tentative conclusions to date are that sea temperatures will increase and that these cyclones will, as a result, likely become more intense and therefore more destructive.
The first pipes for the pipeline in Baku are ordered from Glasgow.
Ludvig Nobel lectures on oil extraction in Baku at the Imperial Russian Technical Society.
The plan for the distribution of paraffin from Baku to the Russian market is completed.
The world’s first tanker, Zoroaster, built in Sweden, is commissioned.
The limited company, Branobel, is formed by Ludvig, Robert and Alfred Nobel, Peter Bilderling and others. The share capital is 3 million roubles.
The Nobels’ employee, Engineer Alfred Törnqvist, is sent to the USA to study the American oil industry.
Wilhelm Hagelin comes to Baku as a stowaway.
Robert Nobel leaves Baku and Alfred Törnqvist becomes manager of Branobel.
- 1880 – First vaccine for cholera by Louis Pasteur
- 1885 – First vaccine for rabies by Louis Pasteur and Émile Roux
- 1890 – First vaccine for tetanus (serum antitoxin) by Emil von Behring
- 1896 – First vaccine for typhoid fever by Almroth Edward Wright, Richard Pfeiffer, and Wilhelm Kolle
- 1897 – First vaccine for bubonic plague by Waldemar Haffkine
- 1921 – First vaccine for tuberculosis by Albert Calmette
- 1923 – First vaccine for diphtheria by Gaston Ramon, Emil von Behring and Kitasato Shibasaburō
- 1924 – First vaccine for scarlet fever by George F. Dick and Gladys Dick
- 1924 – First inactive vaccine for tetanus (tetanus toxoid, TT) by Gaston Ramon, C. Zoeller and P. Descombey
- 1926 – First vaccine for pertussis (whooping cough) by Leila Denmark
- 1932 – First vaccine for yellow fever by Max Theiler and Jean Laigret
- 1937 – First vaccine for typhus by Rudolf Weigl, Ludwik Fleck and Hans Zinsser
- 1937 – First vaccine for influenza by Anatol Smorodintsev
- 1941 – First vaccine for tick-borne encephalitis
- 1952 – First vaccine for polio (Salk vaccine)
- 1954 – First vaccine for Japanese encephalitis
- 1954 – First vaccine for anthrax
- 1957 – First vaccine for adenovirus-4 and 7
- 1962 – First oral polio vaccine (Sabin vaccine)
- 1963 – First vaccine for measles
- 1967 – First vaccine for mumps
- 1970 – First vaccine for rubella
- 1977 – First vaccine for pneumonia (Streptococcus pneumoniae)
- 1978 – First vaccine for meningitis (Neisseria meningitidis)
- 1980 – Smallpox declared eradicated worldwide due to vaccination efforts
- 1981 – First vaccine for hepatitis B (first vaccine to target a cause of cancer)
- 1984 – First vaccine for chicken pox
- 1985 – First vaccine for Haemophilus influenzae type b (HiB)
- 1989 – First vaccine for Q fever
- 1990 – First vaccine for Hantavirus hemorrhagic fever with renal syndrome
- 1991 – First vaccine for hepatitis A
- 1998 – First vaccine for Lyme disease
- 1998 – First vaccine for rotavirus
Ludvig’s son, Emanuel Nobel, becomes responsible for Branobel’s economic matters.
Tsar Alexander II is murdered in a bomb attack. Ludvig Nobel’s import of Alfred’s dynamite is stopped.
The Branobel tanker, Nordenskiöld, is ravaged by fire during loading in Baku.
THE 1881 HAIPHONG TYPHOON
Tying the Coringa cyclone as the sixth deadliest natural disaster is the 1881 typhoon that hit the port city of Haiphong in northeastern Vietnam on October 8. This storm is also believed to have killed an estimated 300,000 people.
How the Economy Played a Role
By the end of the 19th century, New Orleans’, one of the most prosperous cities in the United States, port shipped most of the agricultural goods from the South and was the port of entry for most of the imports from South America and Europe. It had close commercial ties, including Sicily, where it bought lemons and oranges. It is also was the largest producer of sugarcane and cotton. (Norelli)
When many freed slaves moved North, Louisiana looked to Southern Italy to fill those jobs. Steamship companies recruited potential workers, and three steamships a month were running between New Orleans and Sicily by September 1881, charging forty dollars per person.
Italian-Americans were often used as cheap labor on the docks of New Orleans at the turn of the last century.
Employers hired Sicilians because, in their words, they "would make better negros than the negros," would work harder, were "of better stock," and would displace black workers.
However, Sicilians did not play by the unspoken rules. They lived among black Americans, worked with them, and hired them. Then they started taking the jobs that white Americans wanted.
“Many powerful New Orleanians resented the growing presence and economic might of Italian newcomers in the Crescent City.” (Johndeike)
Mayor Joseph A. Shakspeare expressed the common anti-Italian prejudice, complaining that the city had become attractive to “…the worst classes of Europe: Southern Italians and Sicilians…the most idle, vicious, and worthless people among us.” He claimed they were “filthy in their persons and homes” and blamed them for the spread of disease, concluding that they were “without courage, honor, truth, pride, religion, or any quality that goes to make a good citizen.”
“The crackdown on Italian immigrants in New Orleans terrified Italians across the country. In New York City, the Italian-language daily Il Progresso Italo-Americano excoriated English-language newspapers for assuming the guilt of the accused and tried to raise money for their defense. (English-language newspapers implied that the mafia funded the defense.) Italian-American newspapers correctly perceived the events in New Orleans as an attack not on the mafia, but on Italian immigrants.”
Continuous operation is introduced in the distillery with a technology patented by Ludvig Nobel.
Six tankers are delivered to Branobel.
American oil products are driven out of competition in the Russian market.
Villa Petrolea is inaugurated. Ludvig and Carl Nobel visit Baku.
A tanker, Branobel’s Sviet, arrives in London with paraffin in bulk for the first time.
The geologists break new ground.
Financial crisis at Branobel.
A railway tunnel is blasted through the Surami Pass in Georgia using Alfred Nobel’s dynamite.
Alfred Nobel resigns from Branobel’s management.
THE 1887 YELLOW RIVER FLOOD
The Yellow River (Huang He) in China was precariously situated far above most of the land around it in the late 1880s, thanks to a series of dikes built to contain the river as it flowed through the farmland of central China. Over time, these dikes had silted up, gradually lifting the river in elevation. When heavy rains swelled the river in September 1887, it spilled over these dikes into the surrounding low-lying land, inundating 5,000 square miles (12,949 square km), according to “Encyclopedia of Disasters: Environmental Catastrophes and Human Tragedies” (Greenwood Publishing Group, 2008). As a result of this flood, an estimated 900,000 to 2 million people lost their lives.
Tsar Alexander III visits Baku. Emanuel Nobel receives the Imperial Medal and is invited to become a Russian citizen.
Ludvig Nobel dies in France. His son, Emanuel, is given responsibility for Branobel and his other son, Carl, responsibility for the Engineering Works.
The Catastrophic Story Of The Johnstown Flood That Washed Away An Entire Pennsylvania Town In 1889
On May 31, 1889, the Johnstown Flood killed more than 2,200 people in southwestern Pennsylvania when the long-neglected South Fork Dam suddenly gave way.
Like many other towns in the Rust Belt, Johnstown, Pennsylvania, was a bustling community in the late 1800s and early 1900s when the steel industry was at its height. Tragically, the Johnstown Flood of 1889 wiped out nearly ten percent of the area’s booming population.
Located 60 miles east of Pittsburgh, Johnstown was built on a plain between the Little Conemaugh and Stony Creek Rivers, which made the city prone to frequent flooding. In the mid-1800s, a dam was built on the Little Conemaugh, 14 miles upstream from Johnstown, to help control these disasters.
Unfortunately, when the dam failed 50 years later, Johnstown experienced one of the most devastating floods in American history.
The Catastrophic Failure Of The South Fork Dam
In 1889, 30,000 people — many of them steelworkers — called Johnstown, Pennsylvania home. The town’s residents were used to frequent flooding when it rained heavily or when snow in the surrounding mountains melted too quickly, but they were not prepared for what happened on May 31, 1889, when the South Fork Dam collapsed.
According to HISTORY, when the dam was built in the 1840s, it was the largest earth dam in the United States. The structure of dirt and rock that held in the water of man-made Lake Conemaugh stood 72 feet tall and 900 feet long.
The dam was an essential part of a canal system that was used to transport goods along the rivers of Pennsylvania before the Industrial Revolution. However, the introduction of railroads across America eventually replaced canals as the main means of transporting goods, and the dam fell into disrepair as its maintenance was neglected.
In 1879, the South Fork Fishing and Hunting Club purchased Lake Conemaugh and the dam to use as an exclusive spot for wealthy members to go sailing, catch the fish that were stocked in the lake, and relax. Its members included some of the richest men in America, like Andrew Carnegie and Henry Clay Frick.
Despite having access to plenty of money, the club failed to properly maintain the dam. In fact, officials even lowered the height of the structure to make a wider road across the top of it and added screens to the spillway to stop the fish from swimming out, according to the National Park Service.
Both of these “improvements” greatly contributed to the dam’s failure and the subsequent Johnstown Flood.
On May 31, 1889, an engineer at the dam noticed that the spillway screens had become clogged with debris after days of heavy rain. Sensing an oncoming disaster, he rode a horse into the nearby town of South Fork to warn its residents.
Unfortunately, the telegraph lines were down. Nobody could get in contact with Johnstown.
The dam collapsed just after 3 p.m., with a loud boom that could be heard from miles away, and the entirety of Lake Conemaugh rushed forward at speeds of up to 40 miles per hour.
The residents of Johnstown, just 14 miles downstream, had no idea what was coming.
Cholera epidemic in Baku. Branobel takes the initiative in setting up a cholera hospital.
The oil companies in Baku get involved in municipal affairs.
The Development of Radio
Italian inventor Guglielmo Marconi first developed the idea of a radio, or wireless telegraph, in the 1890s. His ideas took shape in 1895 when he sent a wireless Morse code message to a receiver more than a kilometer away. He continued to work on his new invention, and in 1897 he received the official British patent for the radio – which was really a wireless telegraph system at first. Other inventors in Russia and the United States had been working on similar devices, but Marconi made the right political and business connections to gain the first real success with the device. By 1900 there were four competing wireless systems.
Baku’s seven biggest oil companies make an unsuccessful attempt to form a syndicate.
1893 Sea Islands storm that drowned 2,000 people in coastal Georgia and South Carolina.
The 1893 Sea Islands hurricane was a deadly major hurricane that struck the Sea Islands near Savannah, Georgia, on August 27, 1893. It was the seventh-deadliest hurricane in United States history and one of three deadly hurricanes during the 1893 Atlantic hurricane season.
Branobel makes a record profit. Its capital is now 20 million roubles.
Carl Nobel dies in Zürich and his brother, Emanuel, becomes the sole head of all the Nobel companies.
Wilhelm Röntgen had a screen coated with barium platinocyanide that would fluoresce when exposed to cathode rays. On 8 November 1895, he noticed that even though his cathode-ray tube was not pointed at his screen, which was covered in black cardboard, the screen still fluoresced.
He soon became convinced that he had discovered a new type of rays, which are today called X-rays. The following year Henri Becquerel was experimenting with fluorescent uranium salts, and wondered if they too might produce X-rays.
The short-lived war between the United States and Spain began as a Cuban war for independence. American newspapers closely followed the plight of Cuban revolutionaries as they fought Spain from 1895 to 1898.
On 1 March 1896 Becquerel discovered that they did indeed produce rays, but of a different kind; even when the uranium salt was kept in a dark drawer, it still made an intense image on an X-ray plate, indicating that the rays came from within and did not require an external energy source.
The railway between Baku and Batumi is destroyed by flooding.
Robert Nobel died on 7 August and Alfred Nobel on 10 December.
Tsar Nicholas’s coronation party is surrounded by drama.
Marie Curie tested samples of as many elements and minerals as she could find for signs of Becquerel rays, and in April 1898 also found them in thorium. She gave the phenomenon the name “radioactivity”.
Along with Pierre Curie and Gustave Bémont, she began investigating pitchblende, a uranium-bearing ore, which was found to be more radioactive than the uranium it contained. This indicated the existence of additional radioactive elements. One was chemically akin to bismuth, but strongly radioactive, and in July 1898 they published a paper in which they concluded that it was a new element, which they named “polonium“.
The other was chemically like barium, and in a December 1898 paper they announced the discovery of a second hitherto unknown element, which they called “radium“.
Convincing the scientific community was another matter. Separating radium from the barium in the ore proved very difficult. It took three years for them to produce a tenth of a gram of radium chloride, and they never did manage to isolate polonium.
In 1898, Ernest Rutherford noted that thorium gave off a radioactive gas. In examining the radiation, he classified Becquerel radiation into two types, which he called α (alpha) and β (beta) radiation.
Subsequently, Paul Villard discovered a third type of Becquerel radiation which, following Rutherford’s scheme, was called “gamma rays“, and Curie noted that radium also produced a radioactive gas. Identifying the gas chemically proved frustrating; Rutherford and Frederick Soddy found it to be inert, much like argon.
It later came to be known as radon.
Emanuel Nobel buys Alfred’s shares in Branobel, corresponding to 12% of the Nobel Foundation’s basic capital.
Mediation over Alfred Nobel’s will.
Emanuel acquires a license from Diesel.
The Evangelical-Protestant church is established in Baku, partly financed by Nobel.
By the turn of the century, Hollywood had a post office, markets, a hotel, a livery and even a street car. In 1902, banker and real estate mogul H. J. Whitley, also known as the “Father of Hollywood,” stepped in.
Whitley opened the Hollywood Hotel—now the site of the Dolby theater, which hosts the annual Oscars ceremony—and developed Ocean View Tract, an upscale residential neighborhood. He also helped finance the building of a bank and was integral in bringing electricity to the area.
Hollywood was incorporated in 1903 and merged with Los Angeles in 1910. At that time, Prospect Avenue became the now-famous Hollywood Boulevard.
According to industry myth, the first movie made in Hollywood was Cecil B. DeMille’s The Squaw Man in 1914, when its director decided last-minute to shoot in Los Angeles. But In Old California, an earlier film by D.W. Griffith, had been filmed entirely in the village of Hollywood in 1910.
· 1900 Galveston hurricane
How the Galveston Hurricane of 1900 Became the Deadliest US Natural Disaster
The deadliest natural disaster in American history remains the 1900 hurricane in the island city of Galveston, Texas. On September 8, a category four hurricane descended on the town, destroying more than 3,600 buildings with winds surpassing 135 miles per hour.
Estimates of the death toll range from 6,000 to 12,000, according to the National Oceanic and Atmospheric Administration. Tragically, the magnitude of the disaster could’ve been lessened if the U.S. Weather Bureau hadn’t implemented such poor communication policies.
When the storm picked up in early September of 1900, “any modestly educated weather forecaster would’ve known that” it was passing west, says Kerry Emanuel, a professor of atmospheric science at the Massachusetts Institute of Technology. Over in Cuba, where scientists had become very good at tracking storms in the hurricane-prone Caribbean, they “knew that a hurricane had passed to the north of Cuba and was headed to the Gulf of Mexico.”
The Weather Bureau in Washington, however, predicted that the storm would pass over Florida and up to New England—which was very, very wrong.
“I mean they were just way off target,” he says.
The Weather Bureau—predecessor to the National Weather Service—was only 10 years old, and hurricane science in the U.S. wasn’t very advanced. “Galveston occurred at a very interesting time in the science of hurricanes,” Emanuel notes.
The bureau’s director, Willis Moore, “was so jealous of the Cubans that he shut off the flow of data from Cuba to the U.S.,” he says. At the same time, Moore told regional U.S. forecasters that “that they could not on their own issue a hurricane warning, they had to go through Washington”—not a very quick or easy task, in those days.
The combination of blocking information from Cuba, while also making it difficult for local forecasters to report hurricanes, turned out to be deadly.
In the couple days before the storm hit, the Weather Bureau’s chief observer in Galveston, Isaac Cline, began to suspect that Washington’s forecast had been off. He tried to warn the city, but it was too late. Cline’s wife was killed, the port city was devastated, and Galveston was never able to fully recover.
The 1900 hurricane was a wake-up call that the Weather Bureau needed to have better communication channels if it wanted to keep people safe.
“The Galveston hurricane made people realize you can’t play politics with a weather bureau,” Emanuel says. “If you make it political, people will die.”
U.S. hurricane science wouldn’t really take off until the 1940s. But after Galveston, the bureau began to open up communication channels both internationally and within the country. Although the U.S. had begun to send wireless messages out to sea before the hurricane, the practice became more widespread after Galveston.
Today, the U.S. is good at accurately forecasting hurricanes and communicating storm paths to affected areas. “We have come light years from where we were in 1900,” says Jay Barnes, a hurricane historian who has written about storms in North Carolina and Florida.
The bigger problem, which Galveston would still have faced if it had been properly warned in 1900, is the logistical challenge of evacuating large metropolitan areas in short amounts of time, Emanuel says.
In 2005, Hurricane Katrina devastated New Orleans in part because of government negligence, not an inability to accurately predict and communicate the storm’s path. Hurricane Harvey, which wreaked havoc in Houston as well as modern-day Galveston in August 2017, was also well-forecasted. But without functional emergency plans for mass evacuations, cities still end up suffering from natural disasters—even if they can see them coming.
Harvest times for the Nobel brothers’ oil adventure.
Harvest time for Branobel. The company produces 10% of the world’s oil and orders tankers to a value of SEK 12 billion.
New railway and oil pipeline constructed from Baku to Batumi.
Branobel’s attempts to merge with Standard Oil fail.
· Social unrest in Baku. 36 of Branobel’s drilling towers are destroyed.
What was life like for a young Swedish woman in Baku at the turn of the last century?
Amadeo Peter Giannini started the Bank of Italy in San Francisco in 1904. The bank was designed for the hardworking immigrants that the big banks refused to serve. After a merger, he changed his bank’s name to Bank of America. Since acquiring Merrill Lynch in 2008, Bank of America has been the world’s largest wealth manager.
From this period, Giannini is known in particular for lending Walt Disney the funds used to produce Snow White (1937), the first full-length animated film made in the United States.
The first Russian collective agreement is entered into. The Transcaucasian Bolshevik Party was formed in Tbilisi in Georgia.
Strikes result in ”Bloody Sunday” on 22 January in St Petersburg. The first organized strike also takes place in Baku, with extensive destruction of the oil fields.
Strikes force the Nobels’ office and works in St Petersburg to close.
COAST OF ECUADOR, 1906
This earthquake, with a magnitude of 8.8, hit the coast of Ecuador and also affected the neighboring country of Colombia. It triggered a massive tsunami that reached the coasts of San Francisco and Japan. The earthquake and tsunami together led to the deaths of between 500 and 1,500 people, but since the event occurred more than one hundred years ago, the height of the wave, the number of casualties, and the extent of property damage were not precisely recorded. The earthquake was caused by the collision of the Nazca Plate and the South American Plate.
At 05:12 Pacific Standard Time on Wednesday, April 18, 1906, the coast of Northern California was struck by a major earthquake with an estimated moment magnitude of 7.9 and a maximum Mercalli intensity of XI (Extreme).
Calmer politically than the foregoing year, but it will take Branobel 9 years to pick up its former production volume again.
The Romanian oil company, Steaua Romana, forms a union with Branobel, the Rothschilds and Deutsche Bank.
Chinese Famine of 1907
Ranking second in terms of the death toll, the Chinese Famine of 1907 was a short-lived event that took the lives of nearly 25 million people. East-Central China was reeling from a series of poor harvests when a massive storm flooded 40,000 square miles of lush agricultural territory, destroying 100% of the crops in the region. Food riots took place daily and were often quelled through the use of deadly force. It is estimated that, even on a good day, some 5,000 people were dying of starvation. Unfortunately for the Chinese, this would not be their last great famine.
Great Chinese Famine
Much like the Soviet Famine of 1932-1933, the Great Chinese Famine was caused by Communist leaders attempting to force change upon an unwilling population. As part of their “Great Leap Forward,” owning private land was outlawed in China in 1958. Communal farming was implemented in an attempt to increase crop production. More relevant, however, was the importance the Communist Regime placed on iron and steel production. Millions of agricultural workers were forcibly removed from their fields and sent to factories to create metal.
In addition to these fatal errors, Chinese officials mandated new planting methods: seeds were to be planted 3-5 feet under the soil and extremely close together, supposedly to maximize growth and efficiency. In practice, what few seeds sprouted were severely stunted by overcrowding. These failed policies, combined with a flood in 1959 and a drought in 1960, affected the entire Chinese nation. By the time the Great Leap Forward ended in 1962, some 43 million Chinese had died from the famine.
1907 was a bloodstained and tragic year for the Swedish colony in Baku.
The Osage Reign of Terror
The history behind the headright system begins in the late 1800s, when the Osage were driven off their reservation in Kansas. They ultimately settled in Northern Oklahoma, purchasing new land with the blessing of the federal government.
Then, in 1887, the Dawes Act was enacted in order to break up Indian land into parcels and allot those smaller pieces to individuals. Because the Osage had bought their land outright, though, they were better situated to resist the changes than many other tribes.
When the U.S. government finally passed a statute in 1906 to allot Osage land, it included two important provisions. First, land would only be distributed to members of the Osage tribe. Second, no matter who owned that land, the rights for mineral resources like coal, gas and oil would still be owned collectively by the tribe. The rights to the oil, in particular, would turn out to be incredibly valuable.
To divvy up the profits from the collectively owned mineral rights, a new system was put in place. Each of the 2,229 Osage members on the tribal rolls in 1907 was given an equal share of the money coming in, what would come to be called a “headright.” Private companies would lease land from the tribe in order to extract oil, gas, gravel, or coal, and then pay a percentage of what they made into a trust managed by the Bureau of Indian Affairs. The BIA, in turn, would distribute payments from that trust to headright holders.
FBI Formed July 26, 1908 (as the Bureau of Investigation)
The first production Model T was built on August 12, 1908 and left the factory on September 27, 1908, at the Ford Piquette Avenue Plant in Detroit, Michigan. On May 26, 1927, Henry Ford watched the 15 millionth Model T Ford roll off the assembly line at his factory in Highland Park, Michigan.
Observing the radioactive disintegration of elements, Rutherford and Soddy classified the radioactive products according to their characteristic rates of decay, introducing the concept of a half-life.
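As an aside for readers unfamiliar with the term, the half-life concept can be stated compactly. The sketch below is a modern-notation gloss, not Rutherford and Soddy's own symbols: a sample decays exponentially, and the half-life is the time for any amount of it to fall by half.

```latex
% Exponential decay in modern notation (illustrative gloss, not the original 1900s formulation)
N(t) = N_0 \, e^{-\lambda t}
\qquad
T_{1/2} = \frac{\ln 2}{\lambda}
\qquad
N(n \, T_{1/2}) = \frac{N_0}{2^n}
```

So after one half-life half the atoms remain, after two a quarter, and so on; each radioactive product is fingerprinted by its own characteristic half-life, which is what let Rutherford and Soddy tell the products apart.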
Branobel celebrates its 30th anniversary – but the Baku oil is running out.
Standard Oil is split up into 34 companies after the USA’s Supreme Court has decided that the company constitutes an illegal cartel.
Ludvig Nobel’s Engineering Works in St Petersburg celebrate their fiftieth anniversary.
Soddy and Kasimir Fajans independently observed in 1913 that alpha decay caused atoms to shift down two places in the periodic table, while the loss of two beta particles restored it to its original position. In the resulting reorganization of the periodic table, radium was placed in group II, actinium in group III, thorium in group IV and uranium in group VI. This left a gap between thorium and uranium. Soddy predicted that this unknown element, which he referred to (after Dmitri Mendeleev) as “eka-tantalum”, would be an alpha emitter with chemical properties similar to tantalum; the element is now known as protactinium.
It was not long before Fajans and Oswald Helmuth Göhring discovered it as a decay product of a beta-emitting product of thorium. Based on the radioactive displacement law of Fajans and Soddy, this was an isotope of the missing element, which they named “brevium” after its short half-life. However, it was a beta emitter, and therefore could not be the mother isotope of actinium. This had to be another isotope.
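To make the displacement law above concrete, here is a standard textbook illustration from the uranium decay series (my example, not one drawn from the sources quoted here): one alpha decay lowers the atomic number by two, and two successive beta decays raise it back, returning to the original element as a lighter isotope.

```latex
% One alpha decay (Z: 92 -> 90), then two beta decays (Z: 90 -> 91 -> 92)
^{238}_{92}\mathrm{U} \;\to\; ^{234}_{90}\mathrm{Th} + \alpha
\qquad
^{234}_{90}\mathrm{Th} \;\to\; ^{234}_{91}\mathrm{Pa} + \beta^{-}
\qquad
^{234}_{91}\mathrm{Pa} \;\to\; ^{234}_{92}\mathrm{U} + \beta^{-}
```

The chain starts and ends on uranium, but the mass number has dropped from 238 to 234: the same chemical element as a different isotope, which is exactly the distinction Soddy's isotope concept captured.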
Two scientists at the Kaiser Wilhelm Institute (KWI) in Berlin-Dahlem took up the challenge of finding the missing isotope. Otto Hahn had graduated from the University of Marburg as an organic chemist, but had been a post-doctoral researcher at University College London under Sir William Ramsay, and under Rutherford at McGill University, where he had studied radioactive isotopes. In 1906, he returned to Germany, where he became an assistant to Emil Fischer at the University of Berlin. At McGill he had become accustomed to working closely with a physicist, so he teamed up with Lise Meitner, who had received her doctorate from the University of Vienna in 1906, and had then moved to Berlin to study physics under Max Planck at the Friedrich-Wilhelms-Universität. Meitner found Hahn, who was her own age, less intimidating than older, more distinguished colleagues.
Hahn and Meitner moved to the recently established Kaiser Wilhelm Institute for Chemistry in 1913, and by 1920 had become the heads of their own laboratories there, with their own students, research programs and equipment.
The new laboratories offered new opportunities, as the old ones had become too contaminated with radioactive substances to investigate feebly radioactive substances. They developed a new technique for separating the tantalum group from pitchblende, which they hoped would speed the isolation of the new isotope.
World War I casualties
World War I dead and wounded
The total number of military and civilian casualties in World War I was about 40 million: estimates range from around 15 to 22 million deaths and about 23 million wounded military personnel.
Before this, various factors and groups (acting primarily at the state level) pushed for a move towards prohibition and away from a laissez-faire attitude. Cocaine consumption had grown by 1903 to about five times the level of 1890, predominantly among non-medical users outside the middle-aged, American, professional class. Cocaine became associated with laborers, youths, black people, and the urban underworld.
Popularization of cocaine is first evident with laborers who used it as a stimulant, often supplied by employers who falsely believed that it increased productivity. Employers believed African American workers were better at physical work, and it was thought that cocaine added strength to their constitution, which, according to the Medical News, made black people “impervious to the extremes of heat and cold”. Instead, cocaine use quickly acquired a reputation as dangerous, and in 1897 the first state bill of control for cocaine sales came from a mining county in Colorado. Laborers of other races used cocaine as well, such as in northern cities, where cocaine was often cheaper than alcohol. In the Northeast in particular, cocaine became popular amongst workers in factories, textile mills, and on railroads. In some instances, cocaine use supplemented or replaced caffeine as the drug of choice to keep workers awake and working overtime.
Fears of coerced cocaine use, and in particular that young girls would become addicted and thereby enter prostitution, were widespread. Tales of the corruption of the youth by cocaine were common but there is little evidence to support their veracity. Mainstream media reported cocaine epidemics as early as 1894 in Dallas, Texas. Reports of the cocaine epidemic would foreshadow a familiar theme in later so-called epidemics, namely that cocaine presented a social threat more dangerous than simple health effects and had insidious results when used by blacks and members of the lower class. Similar anxiety-ridden reports appeared throughout cities in the South, leading some to declare that “the cocaine habit has assumed the proportions of an epidemic among the colored people”. In 1900, state legislatures in Alabama, Georgia, and Tennessee considered anti-cocaine bills for the first time.
Persian Famine of 1917–1919
World War I brought a period of famine and sickness throughout much of Persia, then ruled by the Qajar dynasty. One of the leading factors in this famine was a succession of severe droughts, which significantly reduced farming outputs. Additionally, what food was produced was confiscated by occupying forces. Changes to trade and general unrest during the war heightened fears and encouraged hoarding, further exacerbating the situation. Together with poor harvests and war profiteering, these pressures produced a famine that spread quickly throughout the area. Death toll numbers are widely debated, but most scholars estimate around 2 million lost their lives to the famine or resulting diseases.
The Russian Revolution was a period of political and social revolution that took place in the former Russian Empire, begun during the First World War. This period saw Russia abolish its monarchy and adopt a socialist form of government.
8 March 1917 – 16 June 1923 (6 years, 3 months and 8 days)
The House of Romanov was the reigning imperial house of Russia from 1613 to 1917. They achieved prominence after the Tsarina, Anastasia Romanovna, was married to the first Tsar of Russia, Ivan the Terrible. Tsar Nicholas II’s immediate family was executed in 1918, but there are still living descendants.
The Romanov family was the last imperial dynasty to rule Russia. They first came to power in 1613, and over the next three centuries, 18 Romanovs took the Russian throne, including Peter the Great, Catherine the Great, Alexander I and Nicholas II. During the Russian Revolution of 1917, Bolshevik revolutionaries toppled the monarchy, ending the Romanov dynasty. Czar Nicholas II and his entire family—including his young children—were later executed by Bolshevik troops.
The Romanov Dynasty also known as “The House of Romanov” was the second imperial dynasty (after the Rurik dynasty) to rule Russia. The Romanov family reigned from 1613 until the abdication of Tsar Nicholas II on March 15, 1917, as a result of the Russian Revolution.
The direct male line of the Romanov family came to an end when Empress Elizabeth died in 1762. The House of Holstein-Gottorp, a branch of the House of Oldenburg, ascended the throne in 1762 with Peter III, a grandson of Peter the Great. Hence, all Russian monarchs from the mid-18th century to the Russian Revolution descended from that branch. In early 1917 the extended Romanov family had 65 members, 18 of whom were killed by the Bolsheviks. The remaining 47 members escaped abroad.
The last Romanov Tsar, Nicholas II, began his reign in the autumn of 1894, when as the second Russian emperor by that name and a direct descendant of Empress Catherine the Great, he ascended the throne. His accession occurred much sooner than anyone had expected. Nicholas’ father, Tsar Alexander III, died unexpectedly at the relatively young age of 49.
Events unfolded rapidly after the passing of Alexander III. The new Tsar, aged 26, quickly married his fiancée of several months, Princess Alix of Hesse – the granddaughter of Queen Victoria of England. The couple had known each other since adolescence. They were even distantly related and had numerous relatives in common, being the niece and nephew of the Prince and Princess of Wales, from different sides of the family.
Upon joining the Romanov family by marriage, Princess Alix converted from Lutheranism to Russian Orthodoxy, as stipulated by canon law, and was renamed Alexandra Feodorovna. The new Russian Empress had grown up in a very different world: the quiet duchy of Hesse by Rhine, the youngest surviving daughter of its grand duke. When she was just a child of six, Alix lost her mother, an English princess and one of Queen Victoria’s daughters, who died of diphtheria at the age of 36. At the same time, Alix also lost her little sister and playmate from the same disease. The untimely deaths of the people closest to her greatly affected the little girl. Never again was she the sunny and carefree child she had been prior to the tragedy.
Alix was 12 years old when she first met the young Tsesarevich Nicholas Romanov, the heir to the Russian throne, when in 1884 she and her family traveled to Russia to attend the wedding of her older sister Elisabeth. Grand Duchess Elisabeth Feodorovna, as she was now known, married one of Nicholas’s uncles, the Grand Duke Sergei Alexandrovich.
In the nineteenth century, many members of the European royal families were closely related to each other. Queen Victoria was referred to as “the grandmother of Europe” because her progeny were dispersed throughout the continent through the marriages of her numerous children. Along with her royal pedigree and improved diplomatic relations among the royal houses of Greece, Spain, Germany and Russia, Victoria’s descendants received something much less desirable: a tiny defect in a gene which regulates normal blood clotting, causing an incurable medical condition called hemophilia. In the late 19th and early 20th century, patients suffering from this disease could literally bleed to death. Even the most benign bruise or bump might prove fatal. The Queen of England’s own son Prince Leopold was a hemophiliac who died prematurely after a minor fall.
The hemophilia gene was also passed on to Victoria’s male grandchildren and great-grandchildren through their mothers in royal houses of Spain and Germany. Alix’s own brother died of complications from hemophilia at the age of three when he suffered relatively minor injuries after accidentally falling out of a window.
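The transmission pattern described above is X-linked recessive inheritance. The cross below is a standard genetics illustration, not something taken from this timeline's sources: X^H stands for a normal copy of the clotting gene and X^h for the defective one, so a carrier mother shows no symptoms herself but passes the defect to half her sons on average.

```latex
% Carrier mother (X^H X^h) crossed with unaffected father (X^H Y).
% Each child inherits one X from the mother, and an X or a Y from the father.
\begin{array}{c|cc}
      & X^{H} \text{ (from mother)}             & X^{h} \text{ (from mother)}          \\ \hline
X^{H} & X^{H}X^{H}\ \text{daughter, unaffected} & X^{H}X^{h}\ \text{daughter, carrier} \\
Y     & X^{H}Y\ \text{son, unaffected}          & X^{h}Y\ \text{son, affected}
\end{array}
```

On these odds each son of a carrier has a one-in-two chance of hemophilia and each daughter a one-in-two chance of being a carrier, which is how the defect could surface in Alexei while his four sisters stayed healthy.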
But arguably the most tragic and significant effect of the hemophilia gene occurred in the ruling Romanov family of Russia. Empress Alexandra Fedorovna learned in 1904 that she was a carrier of hemophilia a few weeks after the birth of her precious son and heir to the Russian throne, Alexei.
Because the Russian legal code contained a statute known as the semi-Salic law, only males could inherit the throne unless there were no dynastic males left. If Nicholas II did not have a son, the crown would pass to his younger brother, Grand Duke Michael Alexandrovich (Mikhail). However, after 10 years of marriage and the births of four healthy grand duchesses, the long-awaited son and heir was stricken by an incurable ailment. Not many subjects realized that their new Tsesarevich’s life often hung by a thread due to his deadly genetic inheritance. Alexei’s hemophilia remained a closely guarded secret of the Romanov family.
The Russian imperial family doted on the little boy; he was understandably overprotected and inevitably spoiled. In 1912, when Alexei was 8 years old, he came as close to death as he ever would after a minor accident while the Romanov family was on one of their holidays in Poland. Alexei’s life was apparently saved by the intervention of a Siberian peasant named Grigori Rasputin. It was not the first time that Rasputin’s seemingly miraculous powers had been invoked. On this occasion, Rasputin had not even been present in Poland but had communicated via a telephone call from his own home in Siberia.
An obituary to announce the passing of the heir to the throne had already been prepared by the Romanov family, and the imperial doctors had all but given up on the seemingly dying boy. But amazingly, Alexei slowly recovered after Rasputin’s telephone call. Hence the man whom Alexei’s parents referred to as “Our Friend” and “Father Grigori” solidified his role as the savior of their beloved son, as well as the Romanov family’s own spiritual advisor whom they viewed as their liaison with God.
During the summer of 1913, the Romanov family celebrated their dynasty’s tercentennial. The dark “time of trouble” of 1905 seemed like a long forgotten and unpleasant dream. To celebrate, the entire Romanov family made a pilgrimage to ancient historical landmarks around the Moscow region, and the people cheered. Nicholas and Alexandra were once again convinced that their people loved them, and that their policies were on the right track.
It would have been difficult for anyone to imagine at this time that only four years after these days of glory, the Russian revolution would depose the Romanov family from its imperial throne and three centuries of the Romanov Dynasty would come to an end. The Tsar who was cheered enthusiastically everywhere during the celebrations of 1913 would no longer rule Russia in 1917. Instead, the Romanov family would be under arrest, and a little more than a year after that they would be dead, murdered by their own people.
Numerous factors influenced the events that led to the sudden end of a three hundred year old Russian imperial dynasty, and it would be an oversimplification to try to pinpoint something specific that caused its downfall. Terrible losses during World War I, continuous rumors and a wide-spread belief that Rasputin was ruling Russia through his influence on the imperial couple, and some other factors, caused events to spiral out of control. The bloody, tragic climax came on the night of July 17, 1918, when a Bolshevik execution squad shot, bludgeoned and bayoneted the entire Romanov family to death.
It is difficult to say whether history would have been different for the last ruling Romanov family if the random nature of genetics had favored the baby boy destined to inherit Russia’s crown, and he had been born as healthy as his sisters. Would the historical outcome for Russia and the world have been any different? Clearly the nature of Tsesarevich Alexei’s medical condition contributed in many ways to the downfall of the Romanov dynasty. The heir’s hemophilia was one of the main reasons why the Tsar and Tsarina isolated themselves in Tsarskoe Selo, trying their best to keep his condition secret not just from their subjects but even from members of the extended Romanov family.
Alexei’s hemophilia was the principal cause of Tsarina Alexandra’s terrible anxieties and various physical ailments, real or imagined. These led her to avoid society, thus alienating the imperial Romanov family from their subjects. This uncharacteristic behavior was misinterpreted by Russia’s aristocratic upper class and antagonized all those who might have supported Nicholas and Alexandra during difficult times. The isolation of the ruling Romanov family fostered a climate of misunderstanding, frustration and ultimately flagrant resentment.
Perhaps if more people in Russia had known about Tsesarevich Alexei’s hemophilia, they would have been able to more fully comprehend the Romanov family’s strange attachment to Grigori Rasputin. A more sympathetic appreciation of the imperial family’s plight might have defused some of the suspicions and sinister innuendos arising from the close relationship of Alexandra, in particular, with the hated Siberian peasant. The degree of Rasputin’s influence, while certainly great, was in fact, exaggerated. But often perception is reality.
There is no denying that Tsesarevich Alexei’s hemophilia was the principal reason why Grigori Rasputin came into the lives of the Romanov family in the first place. This Siberian peasant inadvertently but significantly contributed to discrediting Nicholas II as a ruler among his subjects during a major war, which led to his abdication and eventually to the death of the imperial Romanov family.
The story of the last reigning Romanov family continues to fascinate scholars as well as Russian history buffs. In it there is something for everyone: a great royal romance between a handsome young tsar, the ruler of one eighth of the entire world, and a beautiful German princess who gave up her strong Lutheran faith and life as she knew it, for love. There were their beautiful children: four lovely daughters, and a long-awaited baby boy born with a fatal disease from which he could die at any given moment. There was the controversial “muzhik”, a peasant who seemed to have wormed his way into the imperial palace, and who was seen to have a corrupt and immoral influence on the Romanov family: the Tsar, the Empress and even their children. There was even an unlikely simpleton, or in some people’s opinion a cunning “best friend” to the Empress. This was Anna Vyrubova, who allegedly manipulated the Empress and even the Emperor behind the scenes, in league with the immoral peasant who pretended to be a “holy” man.
There were political assassinations of the powerful, shootings of the innocent, partisan intrigues, worker strikes, mass uprisings and a world war; a murder, a revolution and a bloody civil war. And finally there was regicide – the secret execution in the middle of the night of the last ruling Romanov family, their servants, even their pets in the cellar of the “House of Special Purpose” in the heart of Russia’s Urals.
For many years there were no bodies to prove that these deaths had actually occurred. For more than half a century of Soviet rule, the lack of detailed information surrounding the fate of the murdered Romanov family gave rise to numerous rumors of conspiracies and various survivors, not just in Russia but also in the West. There were those who periodically surfaced claiming to be various Romanov family members – one imperial daughter or another, the former heir, or even the Tsar himself. There were movies, cartoons and books based on the alleged survival of the most famous of all imperial daughters – the Grand Duchess Anastasia – which helped reignite interest in the last imperial Romanov family in the 21st century.
The eventual discovery and scientific identification of the Romanov family’s remains in Ekaterinburg should have put to rest all the conspiracy theories and fairy tales about the final fate of the last Tsar and his family. But astonishingly the controversy continued, not least of all because the Russian Orthodox Church, along with one of the branches of the surviving extended Romanov family, refused to accept the definitive scientific results which proved that the remains found near Ekaterinburg indeed belonged to the murdered members of the last ruling Romanov family. Fortunately, reason prevailed and the remains were finally interred in the Romanov family crypt, where they belonged.
The Espionage Act of 1917 prohibited obtaining information, recordings (pictures or videos), or copying descriptions of any information relating to the national defense, on the grounds that this information could be used to the injury of the United States or to the advantage of a foreign nation.
Why Were American Soldiers in WWI Called Doughboys?
28 July 1914 – 11 November 1918
It’s unknown exactly how U.S. service members in World War I (1914-18) came to be dubbed doughboys—the term most typically was used to refer to troops deployed to Europe as part of the American Expeditionary Forces—but there are a variety of theories about the origins of the nickname.
According to one explanation, the term dates back to the Mexican War of 1846-48, when American infantrymen made long treks over dusty terrain, giving them the appearance of being covered in flour, or dough.
As a variation of this account goes, the men were coated in the dust of adobe soil and as a result were called “adobes,” which morphed into “dobies” and, eventually, “doughboys.”
Among other theories: according to “War Slang” by Paul Dickson, the American journalist and lexicographer H.L. Mencken claimed the nickname could be traced to Continental Army soldiers who kept the piping on their uniforms white through the application of clay. When the troops got rained on, the clay on their uniforms turned into “doughy blobs,” supposedly leading to the doughboy moniker.
The work was interrupted by the outbreak of the First World War in 1914. Hahn was called up into the German Army, and Meitner became a volunteer radiographer in Austrian Army hospitals. She returned to the Kaiser Wilhelm Institute in October 1916. Hahn joined the new gas command unit at Imperial Headquarters in Berlin in December 1916 after traveling between the western and eastern fronts, Berlin and Leverkusen between the summer of 1914 and late 1916.
Most of the students, laboratory assistants and technicians had been called up, so Hahn, who was stationed in Berlin between January and September of 1917, and Meitner had to do everything themselves. By December 1917 she was able to isolate the substance, and after further work was able to prove that it was indeed the missing isotope.
Meitner submitted her and Hahn’s findings for publication in March 1918 to the journal Physikalische Zeitschrift under the title “Die Muttersubstanz des Actiniums; ein neues radioaktives Element von langer Lebensdauer” (“The mother substance of actinium; a new radioactive element with a long lifespan”).
A great deal is at stake over the sale of Branobel.
The Russian Revolution breaks out and the Tsar abdicates on 15 March. Lenin seizes power in a coup d’état on 7 November.
On April 2, 1917, Wilson asked Congress to declare war on Germany, citing its resumed submarine attacks and its attempts to recruit Mexico as an enemy of the United States. The declaration passed by large margins in both the House (373-50) and Senate (82-6). U.S. forces suffered more than 320,000 casualties in World War I, including more than 116,000 deaths.
Who Were the Radium Girls?
In April of 1917, Grace Fryer was an 18-year-old woman who started a new job at the United States Radium Corporation (USRC) as a dial painter. All Grace wanted was to contribute to the war effort since the United States had joined World War I just four days prior. Grace had no idea that taking this job would have a rippling impact on her life and a resounding impact on workers’ rights.
With World War I raging, hundreds of young girls and women came to the studio in Orange, New Jersey, to paint military dials and watches with a new element called radium. Marie Curie discovered this element just two short decades earlier. Dial painting paid, on average, three times more than the women would earn at a factory job, so it was in high demand.
Securing a position here gave women financial freedom in the midst of the female empowerment movement. Teenagers as young as 14 were employed because their smaller hands were perfect for the precision painting work. The workers spread word of this elite dial-painting job to their families and friends, and it wasn’t odd to see entire families of siblings working in the same space.
The way radium glowed was part of its appeal. The young women who painted the dials earned the nickname of the “ghost girls” because they would glow in the dark by the end of their shift. A lot of the girls capitalized on this glow by wearing their best dresses to work so they could visit the dance halls after. They even painted radium on their teeth for a literally glowing smile.
Grace and the other dial painters were taught a painstaking technique to help them paint the tiny numerals on the dials. They were told to put the paintbrushes between their lips and wet them to get a fine tip. Every time the young women did this, they swallowed tiny amounts of the glowing paint.
Lies and the Bitter Truth
The girls were reassured that this paint wouldn’t hurt them. They were told that there was nothing to fear, and that the paint wasn’t dangerous for anyone. Unfortunately, this simply wasn’t the truth.
Marie Curie received radiation burns when she handled radium, and this was documented. People had died horrifying deaths before the dial painters ever started their jobs. Men employed at the radium companies wore lead aprons and handled the radium only with ivory-tipped tongs. The dial painters, on the other hand, received no warning and no such protective measures.
This was a time in history when people widely believed that microscopic amounts of this element had health benefits. It was sold in milk, butter, toothpaste, and cosmetics, and people drank it as a tonic in water, in the belief that it would add years to their lives.
However, these beliefs were pushed by research conducted by the very radium companies that had built a huge industry around the element. They ignored any and all warning signs, and the dial painters were told the paint would make their cheeks rosy.
The Beginning of the Deaths
Mollie Maggia was one of Grace’s fellow dial painters who had to quit work in 1922 due to falling ill. She had no idea what was wrong. It all started with a tooth. Her dentist removed it because it ached, and then the tooth directly next to it started aching, and her dentist extracted it. Where these teeth were previously, ulcers formed. They were full of pus and blood, seeped all of the time, and made her breath unbearable.
Mollie began to experience aching deep in her limbs and joints that got so bad she was rendered unable to walk. Her doctor prescribed aspirin and diagnosed her with rheumatism.
By May of that same year, Mollie’s health situation had become desperate. She had lost most of her teeth, and she developed what was described as “one large abscess” that encompassed the roof of her mouth, her lower jaw, and some bones inside her ears. But this was only the beginning. She returned to the dentist with an aching jaw. When he touched it, her jaw broke off into his hand, and he was able to simply lift it out of her mouth.
Mollie wasn’t the only one having severe issues. Other radium girls, including Grace Fryer, were having pains in their feet and problems with their mouths.
By September 12th, Mollie’s strange infection had spread to her throat. It ate into her jugular vein, and she began hemorrhaging so fast no one could stop it. She was 24 when she died, and her doctors had no idea why. They recorded her cause of death as syphilis, but this was incorrect.
More alarmingly, the former dial painters that worked with Mollie began dying in droves.
The USRC’s Cover Up
The USRC vehemently denied any responsibility for the dial painters’ deaths for two long years. They only commissioned an expert to explore the connection between the dial painting job and the women’s deaths when business started to decline due to swirling rumors.
The study was completely independent, unlike the company’s own research and its claims that radium was beneficial to health. The study confirmed a deadly link between radium and the young women’s deaths. USRC’s president was furious with these findings. Refusing to accept the results, he paid to publish new studies that came to the opposite conclusion: that radium was safe. He lied to the US Department of Labor about the original report’s findings, blamed the women publicly, and criticized the dial painters’ efforts to get funding for their medical bills.
The Glow That Tells No Lies
With the original report covered up, the dial painters faced the challenge of proving that their illnesses were directly linked to the radium they’d ingested and worked with every day. They were up against the public belief that the element was safe. This changed in 1925, when a doctor named Harrison Martland devised tests that proved these women were dying from radium poisoning.
He was also able to explain what was happening in the women’s bodies in response to the radium they had ingested. As early as 1901, evidence had suggested that the element could cause injury when applied externally; Dr. Martland found that ingesting even microscopic amounts could cause catastrophic damage.
All of the radium they’d ingested had settled throughout their bodies, essentially boring holes through their bones in a honeycomb pattern while they were alive. Grace Fryer ended up in a steel back brace when her spine collapsed. Several dial painters’ legs spontaneously fractured and shortened, and another girl lost her jaw.
The women’s damaged skeletal systems began to emit an eerie glow from the radium boring through and embedding itself in the bone: the glow that tells no lies. Sometimes a woman didn’t even realize she had radium poisoning until she looked at her reflection in a mirror in the dead of night, and a glowing specter stared back that sealed her fate.
Dr. Martland realized that ingesting radium was fatal. Since it was embedded so deeply into the women, there wasn’t a way to remove it.
The Fight Begins
The radium industry, as a whole, tried to discredit Dr. Martland’s findings. However, it failed to account for the dial painters’ tenacity and courage. They came together to fight the radium industry, which was essential because young women were still working as dial painters across the US.
Grace Fryer led the movement. At the time, radium poisoning wasn’t a disease for which a victim could claim compensation; it wasn’t even known until the dial painters started falling ill. There was also a statute of limitations with a two-year cutoff, and most dial painters didn’t start showing signs of poisoning until the five-year mark.
However, the tides turned in 1927 when a young lawyer accepted the radium girls’ case. Raymond Berry led Grace and her colleagues into the middle of a courtroom drama that gained international attention. However, time wasn’t on their side. Grace and her friends were given just four short months to live, and the radium companies were determined to drag it out. They ended up settling out of court, but they brought attention to radium poisoning.
This case was plastered all over the front pages of newspapers. A dial painter named Catherine Wolfe from Illinois read about the case with growing horror. However, Radium Dial followed USRC’s lead and denied any responsibility for the poisonings. They denied it despite having given the Illinois dial painters medical tests that showed they had radium poisoning. The firm went as far as interfering with their workers’ autopsies; there were reports that company officials tried to cover it up by stealing the very bones from the corpses themselves.
If the dial painters didn’t die from jaw issues like the ones that plagued Mollie, they would eventually form large cancerous bone tumors called sarcomas. These tumors could grow anywhere. Irene La Porte was a dial painter who died from developing a pelvic tumor that was “larger than two footballs.”
Catherine Wolfe developed a tumor the size of a grapefruit on her hip in 1938. She also lost her teeth, and she reportedly held a handkerchief to her mouth to try and absorb the blood and pus from the abscesses. Her friends were dying in rapid succession, and it only steeled her resolve.
Catherine Wolfe didn’t start fighting for justice until America was in the midst of the Great Depression in the 1930s. The public shunned Catherine and the other dial painters for having the gall to sue one of the few big employers left. In 1938, Catherine gave evidence against the radium companies from her deathbed. Catherine, with help from Leonard Grossman, a lawyer who worked pro bono, won the case.
This case, so often forgotten, was one of the first in which an employer was held directly responsible for its employees’ health. It spurred life-saving regulations and ultimately led to the founding of OSHA (the Occupational Safety and Health Administration), which today operates across the United States. These young women also left an invaluable scientific legacy.
But you won’t find their names in history books or taught in most college courses. They have largely been lost and forgotten. Grace Fryer, Catherine Wolfe, Mollie Maggia, and countless others need to be remembered. They shone a light on the dark side of corporations, and they’re continuing to shine to this day.
Although Fajans and Göhring had been the first to discover the element, custom required that an element be represented by its longest-lived and most abundant isotope, and brevium did not seem appropriate. Fajans agreed to Meitner and Hahn naming the element protactinium and assigning it the chemical symbol Pa. In June 1918, Soddy and John Cranston announced that they had extracted a sample of the isotope but, unlike Hahn and Meitner, were unable to describe its characteristics.
The Democratic Republic of Azerbaijan declares itself independent on 28 May.
The brothers Gösta and Emil Nobel are arrested in St Petersburg but succeed in escaping and are reunited with the entire Nobel family in Stockholm for Christmas.
Emanuel Nobel and his sister-in-law, Genia, flee Baku via Berlin to Stockholm.
The oil industry in Russia is nationalized in June.
In the 1918 flu pandemic, not wearing a mask was illegal in some parts of America. What changed?
But one notable difference is that it was the United States which led the world in mask wearing.
In October 1918, as San Francisco received the pandemic’s second wave, hospitals began reporting a rise in the number of infected patients.
On October 24, 1918, the city’s elected legislative body, the Board of Supervisors of San Francisco, realizing that drastic action needed to be taken with over 4,000 cases recorded, unanimously passed the Influenza Mask Ordinance.
The wearing of face masks in public became mandatory on US soil for the first time.
Adoption of masks
After San Francisco made masks mandatory in public, an awareness campaign began.
The city’s mayor, along with members of the Board of Health, endorsed a Red Cross publicity blitz which told the public: “Wear a Mask and Save Your Life! A Mask is 99% Proof Against Influenza.” Songs were written about mask wearing, including one ditty that featured the lyrics: “Obey the laws, and wear the gauze. Protect your jaws from septic paws.”
Anyone found outdoors without a mask could be fined or even imprisoned.
Newspapers and various state governments in the US linked masks to the ongoing war on the battlefields of Europe: “Gas Masks in the Trenches; Influenza Masks at Home” promised the Washington Times newspaper on September 26, 1918, reporting that 45,000 masks would be provided to US soldiers to ward off “the Spanish Flu.”
When the First World War ended on November 11, gas-mask manufacturers fulfilling government contracts switched to influenza masks.
Policing mask wearing
Mask-wearing laws largely had public support and were mostly policed by consent.
Tucson, Arizona, issued a face mask ordinance on November 14, 1918, with exemptions for preachers, singers and actors in theatres and schoolteachers, all thought to be far enough away from their audiences. Soon after, Police Chief Bailey told the Tucson Citizen not that he would arrest miscreants, but rather that, in his opinion: “No gatherings will be considered fashionable unless the attendees are attired in masks.”
Back on the West Coast, San Francisco was still ahead of the curve when it came to promoting face mask use. On October 25, 1918, the San Francisco Chronicle ran front page pictures of the city’s top judges and leading politicians all wearing face masks.
Soon there was no escaping wearing a mask. All trains arriving at west coast stations were to be met by mask encouragement committees, groups of female volunteers with masks for those who had not managed to procure one out of state.
Of course, there were some who flouted the rules. At a boxing match in California, a photograph taken with a flashlight showed that 50% of the men in the audience weren’t wearing masks. Police enlarged the picture and used it to identify the mask-less.
Each man was warned to make a “voluntary contribution” to a charity for the men fighting overseas, or face prosecution.
Did mask wearing work?
During the 1918 flu pandemic, scientific research around mask use was still largely anecdotal – and the compelling story of one ocean liner caught people’s attention.
In early December 1918, the Times newspaper in London reported that it had been established, by doctors in the United States, that the influenza was “contact-borne and consequently preventable.”
The Times noted that in one London hospital all staff and patients had been issued with, and instructed to constantly wear, face masks. The newspaper cited the successes of face masks on one ship.
The ocean liner sailing between the United States and England had suffered a terrible infection rate coming from New York, the Times reported. When returning to the United States, the captain instituted a face-mask order for crew and passengers, after having read about their use in San Francisco.
No infections were reported on the return trip, despite high infection rates at the time in both Manhattan and Southampton, from where the ship departed. It was impossible to know if the rules on masks on the return voyage were responsible for the lack of infections, but that was how the press interpreted it.
There was some precedent behind the mask guidance.
During the Great Manchurian Plague of 1910-1911, which saw Chinese, Russian, Mongolian and Japanese scientists come together to combat a widespread outbreak of bubonic plague in northern China, face masks had been deemed effective.
Science journalist Laura Spinney, author of the 2017 book “Pale Rider: The Spanish Flu of 1918 and How It Changed the World,” notes that after their experiences in Manchuria in 1911, the Japanese took swiftly to wearing masks in public in 1918.
The Japanese authorities argued that masks were a courteous gesture in protecting others from germs and had been effective in previous, more localized, outbreaks of disease in Japan.
And mask wearing did seem to have a flattening effect on infection rates.
By late December, cities and states in America were feeling confident enough to lift the mask wearing ordinances, as new infections dwindled to single figures in most places.
“Today is the last time for the little gauze face pest,” announced a Chicago newspaper on December 10, 1918.
A century later
In 1918, America adopted mask wearing with a vengeance.
But a century later, it is Asian countries which have remembered the lessons the US learned about the benefits of mask wearing in slowing the spread of infection.
Perhaps that is because in the intervening years Asia has dealt with ongoing outbreaks of cholera, typhoid and other transmittable diseases, right up to SARS in 2003 and avian flu more recently.
Those outbreaks have helped to maintain a mask-wearing culture.
America and Europe have not seen similar outbreaks with such regularity.
So, it seems, the notion of masks as a prophylactic measure has skipped the consciousness of several generations. The coronavirus might be about to change that.
The Treaty of Versailles was a peace treaty signed on 28 June 1919 between Germany and the Allied Powers at the end of World War I.
It was signed in the Palace of Versailles in France. The treaty imposed harsh terms on Germany, such as accepting the responsibility for causing the war, paying reparations, losing territories and colonies, and limiting its military. The treaty was controversial and widely criticized by many people, especially in Germany, who felt it was unfair and humiliating. Some historians argue that the treaty contributed to the rise of Nazism and the outbreak of World War II.
The Treaty of Versailles was signed in the Hall of Mirrors in the Palace of Versailles, France
Branobel’s business in Russia is now run from Paris, and later from Stockholm.
World Changes Due to Radio
In the boom of the 1920s, people rushed to buy radios, and business and social structures adapted to the new medium. Universities began to offer radio-based courses; churches began broadcasting their services; newspapers created tie-ins with radio broadcasts.
By 1922 there were 576 licensed radio broadcasters and the publication Radio Broadcast was launched, breathlessly announcing that in the age of radio, “government will be a living thing to its citizens instead of an abstract and unseen force.”
As with television in later years, however, entertainment came to rule the radio waves much more than governmental or educational content, as commercial sponsors wanted the airtime they paid for to have large audiences. Most listeners enjoyed hearing their favorite music, variety programs that included comic routines and live bands, and serial comedies and dramas. Broadcasts of major sports events became popular as the medium matured and remote broadcasts became possible.
Radio was a key lifeline of information for the masses in the years of World War II. Listeners around the world sat transfixed before their radio sets as vivid reports of battles, victories, and defeats were broadcast by reporters including H.V. Kaltenborn and Edward R. Murrow. Franklin D. Roosevelt, Winston Churchill, Adolf Hitler and other political leaders used the medium to influence public opinion.
The League of Nations was the first worldwide intergovernmental organization whose principal mission was to maintain world peace. It was founded on 10 January 1920 by the Paris Peace Conference that ended the First World War.
Standard Oil buys half of the shares in Branobel on 30 July.
Azerbaijan is occupied by the Red Army and becomes a Soviet Republic on 28 April.
THE 1920 HAIYUAN EARTHQUAKE
“The Haiyuan earthquake was the largest quake recorded in China in the 20th century with the highest magnitude and intensity,” Deng Qidong, a geologist with the Chinese Academy of Sciences, said during a seminar in 2010.
The earthquake, which struck north central China’s Haiyuan County on Dec. 16, 1920, also rocked the neighboring Gansu and Shaanxi Provinces. It was reportedly a 7.8 on the Richter scale; however, China today claims it was of magnitude 8.5. There are also discrepancies in the number of lives lost. The USGS reported total casualties of 200,000, but according to a 2010 study by Chinese seismologists, the death toll could have been as high as 273,400. The region’s thick deposits of loess soil (porous, silty sediment that is very unstable) gave way in massive landslides, which were responsible for over 30,000 of these deaths, according to a 2020 study published in the journal Landslides.
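For a sense of what the disputed magnitude figures imply, here is a minimal sketch using the standard Gutenberg-Richter energy relation (the log of radiated energy scales as 1.5 times magnitude); the relation is textbook seismology, not something taken from the sources above:

```python
# Compare the seismic energy implied by the two reported magnitudes.
# Standard energy-magnitude relation: each whole-magnitude step is
# roughly a 31.6x (10**1.5) increase in radiated energy.
def energy_ratio(m1: float, m2: float) -> float:
    """Ratio of energy released by a magnitude-m2 quake vs a magnitude-m1 quake."""
    return 10 ** (1.5 * (m2 - m1))

print(f"M8.5 vs M7.8: ~{energy_ratio(7.8, 8.5):.1f}x the energy")
# -> ~11.2x, so the two reported figures describe very different events
```

In other words, the gap between the USGS figure and the modern Chinese figure is not cosmetic; an 8.5 would have released roughly eleven times the energy of a 7.8.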
Prohibition began in 1920 with the passing of the Volstead Act. The Eighteenth Amendment of the United States Constitution, prohibiting the production and selling of “intoxicating liquors,” had been ratified in 1919, and the Volstead Act was enacted in order to enforce and regulate the Amendment. Here, alcohol seized by police is dumped into sewage drains in New York.
This liquor store advertises that “The time is getting shorter and so is our stock…” as Prohibition begins in 1920.
This illegal whiskey distillery near Detroit is put out of business.
During Prohibition, Americans were forced to hide their bottles of alcohol in creative ways.
Flappers of the 1920s were young women known for their energetic freedom, embracing a lifestyle viewed by many at the time as outrageous, immoral or downright dangerous. Now considered the first generation of independent American women, flappers pushed barriers to economic, political and sexual freedom for women.
Gilda Gray was not the first to dance the shimmy, but she made it popular nationwide in the 1920s. The young saloon singer went to New York to perform in vaudeville and joined the Ziegfeld Follies in 1922. By then Gray was known as the Shimmy Queen, and made several Hollywood movies between 1919 and 1936.
Zelda Fitzgerald was an author and the wife of F. Scott Fitzgerald. Her lifestyle made her a celebrity outside the literary world, and her husband called her “the first American Flapper.” The two were notorious for public partying, and their drunken antics were a staple of society headlines in the 1920s. From 1930 on, Zelda was in and out of mental hospitals for the rest of her life.
Anita Page started her career in silent films and made an easy transition to “talkies” soon after. She cranked out many films between 1925 and 1933, and came out of retirement occasionally to act again until her death in 2008. At that time, she was hailed as the last silent film star.
Norma Talmadge was one of the biggest silent film stars ever. Between 1910 and 1930, she acted in 160 films and produced 25! Talmadge was also a smart businesswoman. She and her much older husband, Joseph Schenck, formed the Norma Talmadge Film Corporation in 1917, giving them control over her work. The corporation generated profits way beyond what a film actress of the time could have made.
Bessie Smith began singing in minstrel shows and cabarets in 1912. She toured with vaudeville jazz shows for two decades, singing the blues, and more importantly for history, recording music. Her last recording session was in 1933; she died in an auto accident in 1937.
Dorothy Parker wrote poetry, short stories, and essays, and was a founding member of the Algonquin Round Table, a group of fashionable writers and celebrities who met for lunch and drinks and whose lifestyles influenced the smart set from 1919 to 1929.
Josephine Baker achieved some fame in New York as a singer, dancer, and comedienne, but when she went to Paris in 1925, she became an international superstar. Baker’s performances ranged from striptease to opera, and were acclaimed from all sides. Baker became a French citizen in 1937. Her work with the French Resistance during World War II earned her the Croix de Guerre. Baker was also active in the Civil Rights movement in America. However, during the 1920s, she was just the most exotic, sexy, and talented woman in Europe.
Norma Shearer appeared uncredited in the 1920 film The Flapper when she was 18 years old. Between 1920 and 1942, she starred in dozens of films, many under the supervision of MGM executive Irving Thalberg, whom she married in 1927.
Coco Chanel had a brief career on stage in the early 20th century, but will always be known for her fashion designs and the line of clothing and perfume that carries her name. By 1920, the French designer had introduced her “chemise,” the simple, short, and loose dress that allowed flappers the freedom of movement to dance the night away.
What Is a Flapper?
No one knows how the word flapper entered American slang, but its usage first appeared just following World War I.
The classic image of a flapper is that of a stylish young party girl. Flappers smoked in public, drank alcohol, danced at jazz clubs and practiced sexual freedom that shocked the Victorian morality of their parents.
Flappers were famous—or infamous, depending on your viewpoint—for their rakish attire.
They donned fashionable flapper dresses of shorter, calf-revealing lengths and lower necklines, though not typically form-fitting: Straight and slim was the preferred silhouette.
Flappers wore high heel shoes and threw away their corsets in favor of bras and lingerie. They gleefully applied rouge, lipstick, mascara and other cosmetics, and favored shorter hairstyles like the bob.
Designers like Coco Chanel, Elsa Schiaparelli and Jean Patou ruled flapper fashion. Jean Patou’s invention of knit swimwear and women’s sportswear like tennis clothes inspired a freer, more relaxed silhouette, while the knitwear of Chanel and Schiaparelli brought no-nonsense lines to women’s clothing. Madeleine Vionnet’s bias-cut designs (made by cutting fabric against the grain) emphasized the shape of a woman’s body in a more natural way.
Some famous flappers were role models, either in real life or in the movies or other entertainment venues, and others only became famous later, but all looked wonderful in photographs of the 1920s.
The Mafia, a network of organized-crime groups based in Italy and America, evolved over centuries in Sicily, an island ruled until the mid-19th century by a long line of foreign invaders. Sicilians banded together in groups to protect themselves and carry out their own justice. In Sicily, the term “mafioso,” or Mafia member, initially had no criminal connotations and was used to refer to a person who was suspicious of central authority. By the 19th century, some of these groups emerged as private armies, or “mafie,” who extorted protection money from landowners and eventually became the violent criminal organization known today as the Sicilian Mafia. The American Mafia, which rose to power in the 1920s, is a separate entity from the Mafia in Italy, although they share such traditions as omerta, a code of conduct and loyalty.
The Mafia in the United States emerged in impoverished Italian immigrant neighborhoods or ghettos in New York’s East Harlem (or Italian Harlem), the Lower East Side, and Brooklyn; also emerging in other areas of the Northeastern United States and several other major metropolitan areas (such as New Orleans and Chicago) during the late 19th century and early 20th century, following waves of Italian immigration especially from Sicily and other regions of Southern Italy. It has its roots in the Sicilian Mafia but is a separate organization in the United States. Campanian, Calabrian and other Italian criminal groups in the U.S., as well as independent Italian American criminals, eventually merged with Sicilian Mafiosi to create the modern pan-Italian Mafia in North America. Today, the American Mafia cooperates in various criminal activities with Italian organized crime groups, such as the Sicilian Mafia, the Camorra of Campania and the ‘Ndrangheta of Calabria. The most important unit of the American Mafia is that of a “family“, as the various criminal organizations that make up the Mafia are known. Despite the name of “family” to describe the various units, they are not familial groupings.
The Mafia’s Sicilian Roots
For centuries, Sicily, an island in the Mediterranean Sea between North Africa and the Italian mainland, was ruled by a long line of foreign invaders, including the Phoenicians, Romans, Arabs, French and Spanish. The residents of this small island formed groups to protect themselves from the often-hostile occupying forces, as well as from other regional groups of Sicilians. These groups, which later became known as clans or families, developed their own system for justice and retribution, carrying out their actions in secret. By the 19th century, small private armies known as “mafie” took advantage of the frequently violent, chaotic conditions in Sicily and extorted protection money from landowners. From this history, the Sicilian Mafia emerged as a collection of criminal clans or families.
Did you know? The Sicilian Mafia is one of four major criminal networks currently based in Italy; the other three are the Camorra of Naples, the Ndrangheta of Calabria and the Sacra Corona Unita of Puglia.
Although its precise origins are unknown, the term Mafia came from a Sicilian-Arabic slang expression that means “acting as a protector against the arrogance of the powerful,” according to Selwyn Raab, author of “Five Families: The Rise, Decline, and Resurgence of America’s Most Powerful Mafia Empires.” Raab notes that until the 19th century, the word “mafioso” did not refer to someone who was a criminal, but rather a person who was suspicious of central authority. In the 1860s, a play called “I Mafiusi della Vicaria” (“Heroes of the Penitentiary”), about a group of inmates at a Sicilian prison who maintained their own hierarchy and rituals, toured Italy and helped popularize the term Mafia in the Italian language.
The Mafia on the Rise in Italy
In 1861, Sicily became a province of recently unified Italy. However, chaos and crime reigned across the island as the fledgling Italian government tried to establish itself. In the 1870s, Roman officials even asked Sicilian Mafia clans to help them by going after dangerous, independent criminal bands; in exchange, officials would look the other way as the Mafia continued its protection shakedowns of landowners. The government believed this arrangement would be temporary, lasting just long enough for Rome to gain control; instead, the Mafia clans expanded their criminal activities and further entrenched themselves in Sicilian politics and the economy. The Mafia became adept at political corruption and intimidated people to vote for certain candidates, who were in turn beholden to the Mafia. Even the Catholic Church was involved with Mafia clans during this period, according to Raab, who notes that the church relied on Mafiosi to monitor its massive property holdings in Sicily and keep tenant farmers in line.
In order to further strengthen themselves, Sicilian clans began conducting initiation ceremonies in which new members pledged secret oaths of loyalty. Of chief importance to the clans was omerta, an all-important code of conduct reflecting the ancient Sicilian belief that a person should never go to government authorities to seek justice for a crime and never cooperate with authorities investigating any wrongdoing.
How Prohibition Created the Mafia
The Mafia’s influence in Sicily grew until the 1920s, when Prime Minister Benito Mussolini came to power and launched a brutal crackdown on mobsters, who he viewed as a threat to his Fascist regime. However, in the 1950s, the Mafia rose again when mob-backed construction companies dominated the post-World War II building boom in Sicily. Over the next few decades, the Sicilian Mafia flourished, expanding its criminal empire and becoming, by the 1970s, a major player in international narcotics trafficking.
The American Mafia, a separate entity from the Mafia in Sicily, came to power in the 1920s Prohibition era after the success of Italian-American neighborhood gangs in the booming bootleg liquor business. By the 1950s, the Mafia (also known as Cosa Nostra, Italian for “Our Thing”) had become the preeminent organized-crime network in the United States and was involved in a range of underworld activities, from loan-sharking to prostitution, while also infiltrating labor unions and legitimate industries such as construction and New York’s garment industry. Like the Sicilian Mafia, American Mafia families were able to maintain their secrecy and success because of their code of omerta, as well as their ability to bribe and intimidate public officials, business leaders, witnesses and juries.
For these reasons, law-enforcement agencies were largely ineffective at stopping the Mafia during the first part of the 20th century. However, during the 1980s and 1990s, prosecutors in America and Italy began successfully employing tough anti-racketeering laws to convict top-ranking mobsters. Additionally, some Mafiosi, in order to avoid long prison terms, began breaking the once-sacred code of omerta and testified against fellow mob members. By the start of the 21st century, after hundreds of high-profile arrests over the course of several decades, the Mafia appeared to be weakened in both countries; however, it was not eliminated completely and remains in business today.
Russian Famine of 1921
The early 20th century was a tumultuous time for Russians: they lost millions in World War I, experienced a violent revolution in 1917, and then suffered through a brutal civil war. Bolshevik soldiers often forced peasants to surrender their food throughout the wars, with little in return. Many peasants therefore stopped growing crops, since they could not keep what they sowed, and some took to eating their seed grain outright. The result was a massive shortage of both food and seed. By 1921, 5 million Russians had perished.
International oil companies get together to boycott all business with the Bolsheviks, but they fail.
The Bank of Italy merged with the smaller Bank of America, Los Angeles in 1928. In 1930, Giannini changed the name from “Bank of Italy” to “Bank of America”. As chairman of the new, larger Bank of America, Giannini expanded the bank throughout his tenure, which continued until his death in 1949.
The Okeechobee hurricane of 1928, also known as the San Felipe Segundo hurricane, was one of the deadliest hurricanes in the recorded history of the North Atlantic basin, and the fourth deadliest hurricane in the United States, behind only the 1900 Galveston hurricane, the 1899 San Ciriaco hurricane, and Hurricane Maria of 2017.
The Great Depression (1929–1939) was an economic shock that impacted most countries across the world.
Wall Street Crash of 1929
They acknowledged Hahn’s and Meitner’s priority, and agreed to the name. The connection to uranium remained a mystery, as neither of the known isotopes of uranium decayed into protactinium. It remained unsolved until uranium-235 was discovered in 1935.
For their discovery Hahn and Meitner were repeatedly nominated for the Nobel Prize in Chemistry in the 1920s by several scientists, among them Max Planck, Heinrich Goldschmidt, and Fajans himself.
TORCHES OF FREEDOM CAMPAIGN
Originally, the prevailing belief was that women, particularly those considered nice or good girls, did not smoke. Indeed, while tobacco had been consumed in America since the late nineteenth century, it was not until 1929 that women were really expected, or even allowed, to partake in the consumption of tobacco products in public.
1929: Efforts to get Women to Smoke
In the 1920s smoking was rare among women. However, passage of the 19th Amendment ushered in new freedoms and smoking in public became symbolic of women’s new role in society. American Tobacco taps into the women’s cigarette market with the marketing slogan “Reach for a Lucky instead of sweet.”
Almost every magazine and medical journal featured cigarette advertisements with opera singers, athletes, doctors, senators and movie stars.
1930: Radio Commercials
Every major radio show featured tobacco advertisements. Jack Benny would seamlessly weave the advertisement into his comedy hour.
Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. It was constructed between 1931 and 1936 during the Great Depression and was dedicated on September 30, 1935.
THE 1931 YANGTZE RIVER FLOODS
Excessive rainfall over central China in July and August of 1931 triggered the most deadly natural disaster in world history — the Central China floods of 1931. The Yangtze River overtopped its banks as spring snowmelt mingled with the over 24 inches (600 millimeters) of rain that fell during the month of July alone. (The Yellow River and other large waterways also reached high levels.) According to “The Nature of Disaster in China: The 1931 Yangzi River Flood” (Cambridge University Press, 2018), the flood inundated almost 70,000 square miles (180,000 square km) and turned the Yangtze into what looked like a giant lake or ocean. Contemporary government numbers put the number of dead at around 2 million, but other agencies, including NOAA, say it may have been as many as 3.7 million people.
Soviet Famine of 1932-1933
Incredibly, the severity of this famine was not fully known in the West until the collapse of the USSR in the 1990s. The main cause was the policy of collectivization administered by Josef Stalin. Under collectivization, large swaths of land would be converted into collective farms, all maintained by peasants. Stalin implemented this by destroying the peasants’ existing farms, crops, and livestock and forcibly taking their land. Reports of peasants hiding crops for individual consumption led to wide-scale search parties, and any hidden crops found were destroyed. In actuality, many of these crops were simply seeds that would be planted shortly. The destruction of these seeds and the forced collectivization of land caused mass starvation, killing an estimated 10 million people.
This was the first observation of a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. A fully artificial nuclear reaction and nuclear transmutation was achieved in April 1932 by Ernest Walton and John Cockcroft, who used artificially accelerated protons against lithium, to break this nucleus into two alpha particles.
The feat was popularly known as “splitting the atom”, but was not nuclear fission; as it was not the result of initiating an internal radioactive decay process. Just a few weeks before Cockcroft and Walton’s feat, another scientist at the Cavendish Laboratory, James Chadwick, discovered the neutron, using an ingenious device made with sealing wax, through the reaction of beryllium with alpha particles.
Chadwick noted that, being electrically neutral, neutrons would be able to penetrate the nucleus more easily than protons or alpha particles; his discovery thus created a new means of nuclear transmutation. Enrico Fermi and his colleagues in Rome—Edoardo Amaldi, Oscar D’Agostino, Franco Rasetti and Emilio Segrè—picked up on this idea and began studying the results of bombarding elements with neutrons; from his uranium experiments Fermi would conclude that he had created new elements with 93 and 94 protons, which his group dubbed ausenium and hesperium. Rasetti visited Meitner’s laboratory in 1931, and again in 1932 after Chadwick’s discovery of the neutron.
Meitner showed him how to prepare a polonium-beryllium neutron source. On returning to Rome, Rasetti built Geiger counters and a cloud chamber modeled after Meitner’s. Fermi initially intended to use polonium as a source of alpha particles, as Chadwick and Curie had done. Radon was a stronger source of alpha particles than polonium, but it also emitted beta and gamma rays, which played havoc with the detection equipment in the laboratory.
But Rasetti went on his Easter vacation without preparing the polonium-beryllium source, and Fermi realized that since he was interested in the products of the reaction, he could irradiate his sample in one laboratory and test it in another down the hall. A neutron source was easy to prepare by mixing radon with powdered beryllium in a sealed capsule.
Moreover, radon was easily obtained; Giulio Cesare Trabacchi had more than a gram of radium and was happy to supply Fermi with radon. With a half-life of only 3.82 days it would only go to waste otherwise, and the radium continually produced more.
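The arithmetic behind “it would only go to waste otherwise” is simple exponential decay. A minimal sketch, using the 3.82-day half-life stated above (the decay law itself is standard physics, not something from this source):

```python
# Fraction of a radon sample remaining after t days:
# N(t) = N0 * 0.5 ** (t / half_life)
HALF_LIFE_DAYS = 3.82  # radon half-life, as given in the text

def fraction_remaining(days: float) -> float:
    return 0.5 ** (days / HALF_LIFE_DAYS)

for d in (1, 3.82, 7, 14):
    print(f"after {d:5.2f} days: {fraction_remaining(d):.1%} remains")
# after two weeks, less than 8% of the radon is left, so any radon
# not handed over to Fermi would simply have decayed away unused
```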
Working in assembly-line fashion, they started by irradiating water, and then progressed up the periodic table through lithium, beryllium, boron and carbon, without inducing any radioactivity. When they got to aluminum and then fluorine, they had their first successes. Induced radioactivity was ultimately found through the neutron bombardment of 22 different elements.
Meitner was one of the select group of physicists to whom Fermi mailed advance copies of his papers, and she was able to report that she had verified his findings with respect to aluminum, silicon, phosphorus, copper and zinc.
When a new copy of La Ricerca Scientifica arrived at Niels Bohr’s Institute for Theoretical Physics at the University of Copenhagen, Meitner’s nephew, Otto Frisch, as the only physicist there who could read Italian, found himself in demand from colleagues wanting a translation.
The Rome group had no samples of the rare earth metals, but at Bohr’s institute George de Hevesy had a complete set of their oxides that had been given to him by Auergesellschaft, so de Hevesy and Hilde Levi carried out the process with them.
When the Rome group reached uranium, they had a problem: the radioactivity of natural uranium was almost as great as that of their neutron source. What they observed was a complex mixture of half-lives.
Following the displacement law, they checked for the presence of lead, bismuth, radium, actinium, thorium and protactinium (skipping the elements whose chemical properties were unknown), and (correctly) found no indication of any of them.
Fermi noted three types of reactions were caused by neutron irradiation: emission of an alpha particle (n, α); proton emission (n, p); and gamma emission (n, γ). Invariably, the new isotopes decayed by beta emission, which caused elements to move up the periodic table.
Franklin Delano Roosevelt, commonly known as FDR, was an American statesman and politician who served as the 32nd president of the United States from 1933 until his death in 1945.
Executive Order 6102 required all persons to deliver, on or before May 1, 1933, all but a small amount of gold coin, gold bullion, and gold certificates owned by them to the Federal Reserve in exchange for $20.67 (equivalent to $433 in 2021) per troy ounce. The order was issued under the authority of the Trading with the Enemy Act of 1917, as amended by the Emergency Banking Act of 1933.
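As a rough check on that inflation-adjusted figure, here is a minimal sketch; the CPI values are approximate annual averages of my own choosing, not numbers from this document:

```python
# Rough CPI-based check of the $20.67 -> ~$433 conversion quoted above.
CPI_1933 = 13.0   # approximate US annual-average CPI, 1933 (assumption)
CPI_2021 = 271.0  # approximate US annual-average CPI, 2021 (assumption)

price_1933 = 20.67  # dollars paid per troy ounce under EO 6102
adjusted = price_1933 * CPI_2021 / CPI_1933
print(f"${price_1933} in 1933 is roughly ${adjusted:.0f} in 2021 dollars")
# -> about $430, consistent with the ~$433 figure in the text
```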
This situation has continued absolutely uninterrupted since March 9, 1933. We have been in a state of declared national emergency for more than 85 years without knowing it.
According to current laws, as found in 12 USC, Section 95(b), everything the President or the Secretary of the Treasury has done since March 4, 1933 is automatically approved.
The FDIC, or Federal Deposit Insurance Corporation, is an agency created in 1933 during the depths of the Great Depression to protect bank depositors and ensure a level of trust in the American banking system. After the stock market crash of 1929, anxious people withdrew their money from banks in cash, causing a devastating wave of bank failures across the country.
From 1933 until 1941, President Roosevelt’s New Deal programs and policies did more than just adjust interest rates, tinker with farm subsidies and create short-term make-work programs.
What Happened in 1933: Important News and Events, Key Technology and Popular Culture
1933 major news stories included 25% US unemployment, the repeal of the prohibition laws, strong winds and drought stripping topsoil and causing dust storms, Alcatraz becoming a federal penitentiary, and civil war in Cuba. It was one of the worst years of the Great Depression, and very few countries around the world were unaffected.
1933 was the worst year of the Depression, with unemployment peaking at 25.2% (1 in 4 people unemployed). Adolf Hitler became the chancellor of Germany and opened the first concentration camp at Dachau. Tens of thousands traveled the roads and rails in America looking for work, and the US banking system, which was under great strain, was propped up by the US government (the Banking Act of 1933) to try to stop the panic of people withdrawing their money from the banks. The continuing drought in the Midwest turned even more of the land into dust bowls.
1933 was one of the worst years of the Great Depression, and very few countries around the world were unaffected. Because of the Depression, many leaders came to power who might not have done so in normal times; a good example is the rise of Hitler in Germany, where unemployment was high and, following defeat in World War I, morale was at an all-time low. In the thirties each country did its own thing, raising interest rates, implementing import tariffs, or leaving the gold standard, and because of this lack of a coherent worldwide strategy, many feel the Great Depression lasted longer than it should have.
United States — Dust Bowl: Strong winds strip the topsoil from the drought-affected farms of the Midwest, creating the Dust Bowl
1. Due To Farm Automation ( Tractors and Beginning Of Combines ) From 1925 to 1930 Land Under Cultivation Quadruples
2. Poor Farming Practices also increase with agricultural methods used that encouraged erosion.
3. First Severe Dust Storms Begin Late this year stripping away the topsoil
4. Severe Drought begins the following year and Dust Storms Strip away more topsoil leading to the area of the Great Plains Becoming a Dust Bowl
5. Over the next few years due to soil erosion 2.5 million people are forced to leave the Great Plains states because they could no longer earn a living from the land.
6. Following the Dust Bowl years the government implemented massive conservation and education efforts, including soil conservation and anti-erosion techniques such as crop rotation, and farmers changed their practices, allowing much of the land to recover.
United States — Repeal of prohibition
Repeal of Prohibition in the United States: 3.2% beer and wine sales are allowed from the spring, and full repeal comes when the 21st Amendment is passed, repealing the 18th Amendment.
1. Prohibition was a reform movement sponsored by evangelical Protestant churches
2. The Anti-Saloon League became the political driving force for Prohibition (1890 – 1920)
3. The 18th Amendment / the Volstead Act (Prohibition) began on January 17th, 1920
4. Gangsters, including Al Capone, controlled speakeasies and bootlegging, making millions
5. Bootlegging (the illegal business of smuggling alcoholic beverages)
6. Speakeasies (establishments that illegally sold alcoholic beverages)
7. Speakeasies were also called blind pig bars or blind tiger bars
8. Prohibition laws were enforced differently around the country
9. The 21st Amendment is passed repealing the 18th Amendment (Repeal of Prohibition) on December 5, 1933
After the end of World War II British Army officer Major Ivan Hirst, REME Supervises Production of the Volkswagen at the Factory.
The name Volkswagen Beetle was never official until 1968 (it was called the Volkswagen Type 1 up till then).
A Japanese scientist demonstrates a machine gun firing 1,000 shots per minute (modern machine guns can fire 4,000 shots per minute).
United States – Golden Gate Bridge Construction
Construction of San Francisco’s Golden Gate Bridge began during January. The iconic orange suspension bridge was designed in the art deco style by Irving Morrow and the chief engineers were Joseph Strauss and Charles Ellis. It was completed in 1937 and connected the city of San Francisco and Marin County, California. The bridge spans 4,200 feet and is one of the longest and tallest suspension bridges in the world. It cost between $27 million and $35 million to build at the time and was financed by a bond measure due to a lack of readily available funds during the Great Depression.
United States – National Labor Board
The United States government established the National Labor Board (NLB) during August. The agency was created as a part of the National Industrial Recovery Act and National Recovery Administration. The purpose of the National Labor Board was to mediate labor disputes between labor unions and employers. It consisted of seven members, three heads of labor unions, three heads of industry, and the chairman who was US Senator Robert F. Wagner. The NLB had very little authority and President Roosevelt made several executive orders to strengthen it but in 1934 it was dissolved and replaced by the National Labor Relations Board which took the lessons from the NLB and became a more useful organization.
United States – CWA Created
The Civil Works Administration creates temporary construction jobs as a part of the New Deal.
More Information for the Civil Works Administration
The CWA was a project created under the Federal Emergency Relief Administration (FERA). The CWA created construction jobs, mainly improving or constructing buildings and bridges. It ended on March 31, 1934, after spending $200 million a month and giving jobs to four million people.
U.S. President Franklin D. Roosevelt unveils the Civil Works Administration (CWA) during November. As part of the “New Deal”, the Civil Works Administration was created as a short-term employment operation that helped create temporary construction jobs for about 4 million people. The CWA was dismantled by March of 1934, but for a brief time it helped millions get back to work in the midst of the Great Depression. The jobs provided by the CWA helped to build much needed infrastructure across the country. The CWA was later replaced with the more ambitious Works Progress Administration (WPA).
Albert Einstein emigrates to the United States from Germany. Physicist Albert Einstein renounces his German citizenship as he moves to the United States in October to become a professor of theoretical physics at Princeton. Einstein had been known as a pacifist and was Jewish, both factors that contributed to his fear of the new Nazi government that was taking over Germany. Einstein went on to urge President Roosevelt to increase American efforts on researching uranium with the intention of developing nuclear weaponry before Germany had a chance to do so. Einstein did not gain his United States citizenship until 1940.
Worldwide – First Solo Around the World Flight
Wiley Post becomes the first person to fly solo around the world in July. Post left from Floyd Bennett Field in New York and returned after 7 days, 18 hours, and 49 minutes. Post made several stops along the way, including one in Berlin, several in the Soviet Union, Alaska, and Canada. He flew a Lockheed 5C Vega named “Winnie Mae.” He had previously set the record for a non-solo around-the-world flight in the same plane with navigator Harold Gatty in 1931.
United States — The Great Depression
Unemployment in the United States reaches its highest level in the winter of 1932/1933, with roughly 1 in 4 people unemployed.
The 1933 World’s Fair opened on May 27 and was held in Chicago. It was formally known as the “Century of Progress International Exposition.” The exposition was held at Northerly Island near the Museum Campus area of Chicago’s lake front. The theme of many exhibits and structures at the World’s Fair featured the Art Deco and Art Moderne design styles which were at the height of their popularity and the exposition was intended to provide hope for a better future during the Great Depression. Much of the funding for the fair was provided privately as there was a lack of public resources available. It was set to end in November of 1933 but was so popular that it was extended through October 31, 1934.
U.S. – First Drive-In Movie Theater is opened in New Jersey.
United Kingdom — Nuclear Reaction
More Information and Timeline for Leo Szilard Nuclear Reaction.
Physicist Leo Szilard comes up with the idea for a nuclear chain reaction on the 12th of September while in London. In 1934 he filed for a patent on his idea of a “neutronic reactor.” He continued to work on this idea and joined forces with another famed physicist Enrico Fermi. They came to the conclusion that an atomic bomb could be made with Uranium. Due to this conclusion, in 1939 Szilard wrote a letter to President Roosevelt that was signed by Albert Einstein urging the U.S. government to research and develop nuclear weaponry before Nazi Germany could do so themselves.
This led to the creation of the Manhattan Project. Szilard and Fermi went on to create the first man-made self-sustaining nuclear chain reaction in December 1942.
The film King Kong premieres during March.
The film “King Kong” premieres at Radio City Music Hall and the RKO Roxy in New York City during March. The film, considered by many to be one of the best horror films ever made, told the story of a gigantic prehistoric ape who was found and taken to New York by a film crew, where he wreaks havoc on the city. Much of the plot revolves around King Kong’s infatuation with an actress from the crew, but the film also features many adventure-filled moments and battle scenes between King Kong and either the humans or other creatures in his surrounding environment. The film starred Fay Wray, Robert Armstrong, and Bruce Cabot. “King Kong” was quite successful after its release and received mostly positive reviews, as it was considered ground-breaking in its use of special effects at that time in cinema history.
The electron microscope is invented in Germany by Ernst Ruska
Shirley Temple signs a contract with Fox at 5 years old
The Chocolate chip cookie is invented
The Board Game Monopoly is invented
Adolf Hitler is appointed Chancellor of Germany
Adolf Hitler bans all other political parties turning Germany into a One Party State
Enabling Act: The Reichstag passes the Enabling Act, making Adolf Hitler dictator of Germany, and the Gestapo is established
Germany Quits the League of Nations causing concern over German Intentions towards France and other European Countries.
Inoculation against diphtheria begins in the Western world
The 20th Amendment to the US Constitution Is Ratified on January 23rd which establishes the beginning and ending of the terms of the elected federal officials.
Wiley Post becomes the first man to fly solo around the world
Alcatraz becomes a federal penitentiary.
The dirigible airship “The Akron” crashes
On September 30, 1938, British Prime Minister Neville Chamberlain returns from Germany and announces to the world “Peace In Our Time”
Typhus Epidemic strikes thousands in Moscow
Civil War In Cuba forcing American businesses to close up shop.
The Yellow River breaches its banks in China, creating mass starvation in 1933 due to crop failure.
Tornadoes strike Kentucky and Tennessee leaving 61 dead
The Loch Ness Monster is sighted for the first time in modern times on May 2
Major World Political Leaders 1933
Australia — Prime Minister — Joseph Lyons —
Brazil — President — Getúlio Vargas —
Canada — Prime Minister — Richard Bedford Bennett —
Germany — Chancellor — Adolf Hitler — From January 30
Italy — Prime Minister — Benito Mussolini —
Japan — Prime Minister — Makoto Saito —
Mexico — President — Abelardo L. Rodríguez —
Russia / Soviet Union — General Secretary of the Central Committee — Joseph Stalin —
South Africa — Prime Minister — James Barry Munnik Hertzog —
United States — President — Herbert Hoover — Till March 4
United States — President — Franklin D. Roosevelt — From March 4
United Kingdom — Prime Minister — Ramsay MacDonald —
THE BANKRUPTCY OF THE UNITED STATES
By Trasael Adnepos June 2001
People seem more inclined to research and investigate root causes and actual conditions in hard times, so I am posting this information once more, in the hope some of my countrymen who do not UNderstand what is presently happening will become aware that they are UNaware, and in awakening, awaken others to the end game. I really was a railway switchman once, before I enlisted in the Army, and I hear a night train coming, and am pretty sure the engineer is asleep or dead…
If you don’t mind, this isn’t a thread about the merits of metals, but about recent history, and worse things we can expect if we don’t wake up and demand some accountability from our alleged servants in government.
The fact of the matter is, the United States did go “Bankrupt” in 1933 and was declared so by President Roosevelt by Executive Orders 6073, 6102, 6111 and by Executive Order 6260 on March 9, 1933, under the “Trading With The Enemy Act” of October 6, 1917, AS AMENDED by the Emergency Banking Relief Act, 48 Stat 1, Public Law No.1, which is presently codified at 12 USCA 95a and confirmed at 95b. You can confirm this for yourself by reading it on FindLaw.
Thereafter, Congress confirmed the bankruptcy on June 5, 1933, and thereupon impaired the obligations and considerations of contracts through the “Joint Resolution To Suspend The Gold Standard And Abrogate The Gold Clause, June 5, 1933” (See: HJR-192, 73rd Congress, 1st Session). When the Courts were called upon to rule on various of the provisions designed to implement and complement FDR’s Emergency BANKING Relief Act of March 9, 1933, they were all found unconstitutional, so what FDR did was simply stack the Courts with HIS chosen obsequious members of the bench/bar and then sent many of the cases back through and REVERSED the rulings.
House Joint Resolution 192 (HJR-192), 48 Stat. 112, was passed by Congress on June 5, 1933. The ‘Act’ impaired the obligations and considerations of contracts and declared that the notes of the Federal Reserve banks were “legal tender” for the payment of both public and private debts, and that payment in gold Coin was against “public policy”. (In effect, FDR and Congress, under executive orders and legislative fiat, nationalized the people’s money, i.e., their gold Coin. Nationalization is a violation of the Law of Nations and existing public policy of Congress. See: Hilton vs. Guyot, 159 U.S. 113 (1895). The gold Coin that was confiscated (nationalized) was later used to purchase voting stockholder shares in The Bank and The Fund at $35 per ounce.) At this point in time, “Fair Market Value”, i.e., a willing seller and buyer, without compulsion, lost any substantial meaning.
Moreover, all of the Governors of the several States of the Union, who were summoned to and were in Washington, D.C. during the several days of this pre-planned economic “Emergency” (the first phase of which was to nationalize and expropriate the people’s Money, i.e., their gold Coin on deposit in the banks), pledged the full faith and credit thereof to the aid of the National Government, and formed various socialist committees, such as the “Council of State Governments”, “Social Security Administration”, etc., to purportedly deal with the economic “Emergency.”
The Council of State Governments has been absorbed into such things as the National Conference Of Commissioners On Uniform State Laws, whose headquarters is located in Chicago, Illinois, and “all” being “members of the Bar”, and operating under a different “Constitution and By-Laws”, far distant from the depositories of the public records, and it is this organization that has promulgated, lobbied for, passed, adjudicated and ordered the implementation and execution of their purported “Uniform” and “Model” Acts and pretended statutory provisions, in order to “help implement international treaties of the United States or where world uniformity would be desirable.” (1990/91 Reference Book, NCCUSL).
These organizations operate under the “Declaration of INTERdependence” of January 22, 1937, and published some of their activities in “The Book Of The States.” The 1937 Edition openly declares that the people engaged in such activities as the Farming/Husbandry Industry had been reduced to mere feudal “Tenants” on the Land they supposedly owned.
On April 25, 1938, the supreme Court overturned the standing precedents of the prior 150 years concerning “common law,” in the federal government.
“THERE IS NO FEDERAL COMMON LAW, and CONGRESS HAS NO POWER TO DECLARE SUBSTANTIVE RULES OF COMMON LAW applicable IN A STATE, WHETHER they be LOCAL or GENERAL in their nature, be they COMMERCIAL LAW OR a part of the LAW OF TORTS.” — Erie Railroad Co. vs. Tompkins, 304 U.S. 64, 82 L.Ed. 1188.
You must realize that the Common Law is the fountain source of Substantive and Remedial Rights, if not our very Liberties.
The members and association of the Bar thereafter formed committees, granted themselves special privileges, immunities and franchises, and held meetings concerning the Judicial procedures, and further, amended laws so as “to conform to a trend of judicial decisions or to accomplish similar objectives”, including hodgepodging the jurisdictions of Law and Equity together, which is known today as “One Form Of Action.” This was not by accident, but by a carefully conceived plan.
The enumerated, specified and distinct Jurisdictions established by the ordained Constitution (1787), Article III, Section 2, and under the Bill of Rights (1791), Amendment VII, were further hodgepodged and fundamentally changed in 1982 to include Admiralty jurisdiction, which was once again brought inland.
“This is the FUNDAMENTAL CHANGE necessary to effect unification of Civil and Admiralty procedure. Just as the 1938 Rules ABOLISHED THE DISTINCTION between actions At Law and suits in Equity, this CHANGE WOULD ABOLISH THE DISTINCTION between CIVIL actions and suits in ADMIRALTY.” (See: Federal Rules Of Civil Procedure, 1982 Ed., pg. 17; also see, Federalist Papers No. 83; Declaration Of Resolves Of The First Continental Congress, October 14, 1774; Declaration Of Cause And Necessity Of Taking Up Arms, July 6, 1775; Declaration Of Independence, July 4, 1776; and, Bennet vs. Butterworth, 52 U.S. 669)
The United States thereafter entered the second World War during which time the
“League of Nations” was reinstituted under PRETENSE of the “United Nations” (22
USCA 287, et seq.), and the “Bank For International Settlements” was reinstituted under
PRETENSE of the “Bretton Woods Agreement” (22 USCA 286 et seq.) as the
“International Monetary Fund” (The Fund) and the “International Bank For
Reconstruction And Development” (The Bank or World Bank).
The United States as a corporate body politic (artificial), came out of World War II in
worse economic condition than when it entered, and in 1950 declared Bankruptcy and
“Reorganization.” The Reorganization is located in Title 5 of the United States Code
Annotated. The “Explanation” at the beginning of 5 U.S.C.A. is MOST informative
reading. The “Secretary of Treasury” was appointed as the “Receiver” in Bankruptcy.
(See: Reorganization Plan No. 26, 5 U.S.C.A. 903; Public Law 94-564, Legislative
History, pg. 5967)
The United States went down the road and periodically filed for further
Reorganizations. Things and situations worsened, having done what they were
Commanded NOT to do (See: Madison’s Notes, Constitutional Convention, August 16,
1787; Federalist Papers No. 44), and in 1965 crowned their continuous fraudulent
activities with passage of the “Coinage Act of 1965” completely debasing the
Constitutional Coin (gold & silver, i.e., “Dollar”). (See: 18 USCA 331 & 332; U.S. vs.
Marigold, 50 U.S. 560, 13 L.Ed 257) At the signing of the Coinage Act on July 23,
1965, Lyndon B. Johnson stated in his press release that:
“When I have signed this bill before me, we will have made the first fundamental
change in our coinage in 173 years. The Coinage Act of 1965 supersedes the Act of
1792. And that Act had the title: An Act Establishing a Mint and Regulating the
Coinage of the United States….”
“Now I will sign this bill to make the first change in our coinage system since the 18th
Century. To those members of Congress, who are here on this historic occasion, I want
to assure you that in making this change from the 18th Century we have no idea of
returning to it.”
It is important to take cognizance of the fact that NO Constitutional Amendment was
EVER obtained to FUNDAMENTALLY CHANGE, amend, abridge or abolish the
Constitutional mandates, provisions or prohibitions, but due to internal and external
diversions surrounding the Viet Nam War, etc., the USURPATION and BREACH went
unchallenged and unnoticed by the general public at large, who had become “a wealthy
man’s cannon fodder or cheap source of slave labor”. (See: Silent Weapons For Quiet
Wars, pgs. 6, 7, 8, 9, 12, 13 & 56) Congress was clearly delegated the Power and
Authority to regulate and maintain the true and inherent “value” of the Coin within the
scope and purview of Article I, Section 8, Clauses 5 & 6 and Article I, Section 10,
Clause I, of the ordained Constitution (1787), and further, a corresponding DUTY and
OBLIGATION to maintain said gold and silver Coin and Foreign Coin at and within the
necessary and proper “equal weights and measures” clause. (See also: Bible,
Deuteronomy, Chapter 25, verses 13 thru 16; Proverbs, Chapter 16, Verse 11; Public …)
Those exercising the Offices of the several States, in equal measure, knew that such “De
Facto Transitions” were unlawful and unauthorized, but sanctioned, implemented and
enforced the complete debauchment and the resulting “governmental, social, industrial
economic change” in the De Jure States and in the United States of America, and were
and are now under the delusion that they can do both directly and indirectly what they
were absolutely prohibited from doing. (See: Craig vs. Missouri, 4 Peters 903).
You can confirm the whole affair by taking a look at 12 USC 95a and 95b. In addition, see the various Reorganization Acts listed in Title 5 of the United States Code. There are your legal public records and historic proofs. Now we are going to hear from a former
Congressman who (surprise!) ended up indicted and in federal prison, while more
brazen felons continued to run the Congress:
United States Congressional Record, March 17, 1993 Vol. 33, page H-1303
Speaker-Rep. James Traficant, Jr. (Ohio) addressing the House:
“Mr. Speaker, we are here now in Chapter 11. Members of Congress are official trustees
presiding over the greatest reorganization of any Bankrupt entity in world history, the
U.S. Government. We are setting forth hopefully, a blueprint for our future. There are
some who say it is a coroner’s report that will lead to our demise.
It is an established fact that the United States Federal Government has been dissolved
by the Emergency Banking Act, March 9, 1933, 48 Stat. 1, Public Law 89-719; declared
by President Roosevelt, being bankrupt and insolvent. H.J.R. 192, 73rd Congress in
session June 5, 1933 – Joint Resolution To Suspend The Gold Standard and Abrogate
The Gold Clause dissolved the Sovereign Authority of the United States and the official
capacities of all United States Governmental Offices, Officers, and Departments and is
further evidence that the United States Federal Government exists today in name only.
The receivers of the United States Bankruptcy are the International Bankers, via the
United Nations, the World Bank and the International Monetary Fund. All United States
Offices, Officials, and Departments are now operating within a de facto status in name
only under Emergency War Powers. With the Constitutional Republican form of
Government now dissolved, the receivers of the Bankruptcy have adopted a new form
of government for the United States. This new form of government is known as a
Democracy, being an established Socialist/Communist order under a new governor for
America. This act was instituted and established by transferring and/or placing the
Office of the Secretary of Treasury to that of the Governor of the International
Monetary Fund. Public Law 94-564, page 8, Section H.R. 13955 reads in part: “The
U.S. Secretary of Treasury receives no compensation for representing the United States.”
Gold and silver were such a powerful form of money during the founding of the united states of
America, that the founding fathers declared that only gold or silver coins can be
“money” in America.
Since gold and silver coinage were heavy and inconvenient for a
lot of transactions, they were stored in banks and a claim check was issued as a money
substitute. People traded these claim checks as money, or “currency.” Currency is not money,
but a money substitute. Redeemable currency must promise to pay a dollar equivalent in
gold or silver money.
Federal Reserve Notes (FRNs) make no such promises, and are
not “money.” A Federal Reserve Note is a debt obligation of the federal United States
government, not “money?’ The federal United States government and the U.S. Congress
were not and have never been authorized by the Constitution for the united states of
America to issue currency of any kind, but only lawful money: gold and silver coin.
It is essential that we comprehend the distinction between real money and paper money substitutes. One cannot get rich by accumulating money substitutes; one can only get deeper into debt. We the People no longer have any “money.” Most Americans have not
been paid any “money” for a very long time, perhaps not in their entire life. Now do you
comprehend why you feel broke? Now, do you understand why you are “bankrupt,”
along with the rest of the country?
Federal Reserve Notes (FRNs) are unsigned checks written on a closed account. FRNs are an inflatable paper system designed to create debt through inflation (devaluation of currency). Whenever there is an increase of the supply of a money substitute in the economy without a corresponding increase in the gold and silver backing, inflation results. Inflation is an invisible form of taxation that irresponsible governments inflict on their citizens. The Federal Reserve Bank, which controls the supply and movement of FRNs, has everybody fooled. They have access to an unlimited supply of FRNs, paying only for the printing costs of what they need. FRNs are nothing more than promissory notes for U.S. Treasury securities (T-Bills) – a promise to pay the debt to the Federal Reserve Bank.
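A minimal worked example of the dilution arithmetic behind this claim, using purely illustrative numbers that are not from the source: if $N$ notes circulate against a fixed stock of $G$ ounces of metal, each note is backed by $G/N$ ounces, so printing more notes without adding metal thins that backing.

$$\frac{100\ \mathrm{oz}}{100\ \mathrm{notes}} = 1\ \mathrm{oz\ per\ note} \quad\longrightarrow\quad \frac{100\ \mathrm{oz}}{200\ \mathrm{notes}} = 0.5\ \mathrm{oz\ per\ note}$$

On this view, doubling the supply of notes halves the metal each note can claim; that dilution is the “devaluation of currency” the passage describes.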
There is a fundamental difference between “paying” and “discharging” a debt. To pay a
debt, you must pay with value or substance (i.e. gold, silver, barter or a commodity).
With FRNs, you can only discharge a debt. You cannot pay a debt with a debt currency
system. You cannot service a debt with a currency that has no backing in value or
substance. No contract in Common law is valid unless it involves an exchange of “good
& valuable consideration.” Unpayable debt transfers power and control to the sovereign
power structure that has no interest in money, law, equity or justice because they have
so much wealth already.
Their lust is for power and control. Since the inception of central banking, they have
controlled the fates of nations.
The Federal Reserve System is based on the Canon law and the principles of
sovereignty protected in the Constitution and the Bill of Rights. In fact, the international
bankers used a “Canon Law Trust” as their model, adding stock and naming it a “Joint
Stock Trust.” The U.S. Congress had passed a law in 1873 making it illegal for any legal “person” to duplicate a “Joint Stock Trust.” The Federal Reserve Act was legislated post-facto (to 1870), although ex post facto laws are strictly forbidden by the Constitution.
The Federal Reserve System is a sovereign power structure separate and distinct from
the federal United States government. The Federal Reserve is a maritime lender, and/or
maritime insurance underwriter to the federal United States operating exclusively under
Admiralty/Maritime law. The lender or underwriter bears the risks, and the Maritime law compelling specific performance in paying the interest, or premiums, is the same. Assets of the debtor can also be hypothecated (to pledge something as security without taking possession of it) by the lender or underwriter. The Federal Reserve Act stipulated that the interest on the debt was to be paid in gold. There was no stipulation in the Federal Reserve Act for ever paying the principal.
Prior to 1913, most Americans owned clear, allodial title to property, free and clear of any liens or mortgages, until the Federal Reserve Act (1913) “hypothecated” all property within the federal United States to the Board of Governors of the Federal Reserve, in which the Trustees (stockholders) held legal title. The U.S.
citizen (tenant, franchisee) was registered as a “beneficiary” of the trust via his/her birth
certificate. In 1933, the federal United States hypothecated all of the present and future
properties, assets and labor of their “subjects,” the 14th Amendment U.S. citizen, to the
Federal Reserve System.
In return, the Federal Reserve System agreed to extend the federal United States
corporation all the credit “money substitute” it needed. Like any other debtor, the
federal United States government had to assign collateral and security to their creditors
as a condition of the loan. Since the federal United States didn’t have any assets, they assigned the private property of their “economic slaves”, the U.S. citizens, as collateral against the unpayable federal debt. They also pledged the unincorporated federal territories, national parks, forests, birth certificates, and nonprofit organizations as collateral against the federal debt. All has already been transferred as payment to the international bankers.
Unwittingly, America has returned to its pre-American Revolution, feudal roots, whereby all land was held by a sovereign and the common people had no right to hold allodial title to property. Once again, We the People are the tenants and sharecroppers
renting our own property from a Sovereign in the guise of the Federal Reserve Bank.
We the people have exchanged one master for another.
This has been going on for over eighty years without the “informed knowledge” of the
American people, without a voice protesting loud enough. Now it’s easy to grasp why
America is fundamentally bankrupt.
Why don’t more people own their properties outright?
Why are 90% of Americans mortgaged to the hilt, with little or no assets after all debts and liabilities have been paid? Why does it feel like you are working harder and
harder and getting less and less?
We are reaping what has been sown, and the result of our harvest is a painful
bankruptcy, and a foreclosure on American property, precious liberties, and a way of
life. Few of our elected representatives in Washington, D.C. have dared to tell the truth.
The federal United States is bankrupt. Our children will inherit this unpayable debt, and
the tyranny to enforce paying it.
America has become completely bankrupt in world leadership, financial credit and its
reputation for courage, vision and human rights. This is an undeclared economic war,
bankruptcy, and economic slavery of the most corrupt order! Wake up America! Take
back your Country.”
So there it is. No wild-eyed “conspiracy theories”, just the facts, witnessed and
recorded. If you are here to defend the status quo, please don’t bother, but please do
answer whether or not you believe any citizen would be liable to criminal prosecution if
we modeled our lives or financial affairs after the conduct of what passes for “our”
government. If you care what happens to US next, tell ten people to tell ten more. The
hour is late. Conspiracy is not “a theory”; it is a federal felony.
Based on the periodic table of the time, Fermi believed that element 93 was eka-rhenium (the element below rhenium), with characteristics similar to manganese and rhenium. Such an element was found, and Fermi tentatively concluded that his experiments had created new elements with 93 and 94 protons, which he dubbed ausonium and hesperium. The results were published in Nature in June 1934. However, in this paper Fermi cautioned that “a careful search for such heavy particles has not yet been carried out, as they require for their observation that the active product should be in the form of a very thin layer. It seems therefore at present premature to form any definite hypothesis on the chain of disintegrations involved.” In retrospect, what they had detected was indeed an unknown rhenium-like element, technetium, which lies between manganese and rhenium on the periodic table.
Leo Szilard and Thomas A. Chalmers reported that neutrons generated by gamma rays acting on beryllium were captured by iodine, a reaction that Fermi had also noted. When Meitner repeated their experiment, she found that neutrons from the gamma-beryllium sources were captured by heavy elements like iodine, silver and gold, but not by lighter ones like sodium, aluminium and silicon.
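For reference, the photoneutron reaction at work in these gamma–beryllium sources can be written as follows. This is the standard textbook form with the usual threshold value, supplied here for clarity rather than quoted from the source:

$$\gamma + {}^{9}\mathrm{Be} \rightarrow {}^{8}\mathrm{Be} + n \qquad (\text{threshold} \approx 1.67\ \mathrm{MeV})$$

Neutrons produced this way carry far less kinetic energy than the several-MeV neutrons from (α, n) sources, which bears on why their capture behavior looked so different between heavy and light elements.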
The current model of the nucleus in 1934 was the liquid drop model first proposed by George Gamow in 1930. His simple and elegant model was refined and developed by Carl Friedrich von Weizsäcker and, after the discovery of the neutron, by Werner Heisenberg in 1935 and Niels Bohr in 1936; it agreed closely with observations. In the model, the nucleons were held together in the smallest possible volume (a sphere) by the strong nuclear force, which was capable of overcoming the longer-ranged Coulomb electrical repulsion between the protons. The model remained in use for certain applications into the 21st century, when it attracted the attention of mathematicians interested in its properties, but in its 1934 form it confirmed what physicists thought they already knew: that nuclei were static, and that the odds of a collision chipping off more than an alpha particle were practically zero.
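The quantitative expression of this liquid-drop picture is the semi-empirical (Bethe–Weizsäcker) mass formula, which writes the binding energy as volume, surface, Coulomb, asymmetry and pairing contributions. The version below is a standard modern parameterization with representative fitted coefficients, added here for clarity rather than taken from the source:

$$E_B = a_V A - a_S A^{2/3} - a_C\,\frac{Z(Z-1)}{A^{1/3}} - a_A\,\frac{(A-2Z)^2}{A} + \delta(A,Z)$$

with $a_V \approx 15.8$, $a_S \approx 18.3$, $a_C \approx 0.714$ and $a_A \approx 23.2$ (all in MeV). The tension between the surface term, which favors one compact drop, and the Coulomb term, which favors pushing the drop apart, is exactly what later made fission intelligible within this same model.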
The Federal Housing Administration is a government agency that FDR established in 1934 to combat the housing crisis of the Great Depression.
Emanuel Nobel dies.
Meitner concluded that slow neutrons were more likely to be captured than fast ones, a finding she reported to Naturwissenschaften in October 1934. Everyone had been thinking that energetic neutrons were required, as was the case with alpha particles and protons, but that requirement exists only to overcome the Coulomb barrier of a charged projectile; neutrally charged neutrons are more likely to be captured by the nucleus the more time they spend in its vicinity. A few days later, Fermi considered a curiosity that his group had noted: uranium seemed to react differently in different parts of the laboratory; neutron irradiation conducted on a wooden table induced more radioactivity than on a marble table in the same room. Fermi thought about this and tried placing a piece of paraffin wax between the neutron source and the uranium.
This resulted in a dramatic increase in activity. He reasoned that the neutrons had been slowed by collisions with hydrogen atoms in the paraffin and wood. The departure of D’Agostino meant that the Rome group no longer had a chemist, and the subsequent loss of Rasetti and Segrè reduced the group to just Fermi and Amaldi, who abandoned the research into transmutation to concentrate on exploring the physics of slow neutrons.
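The moderation effect Fermi exploited can be made quantitative with the standard elastic-scattering result; the following is a textbook sketch, not a calculation from the source. A neutron of energy $E$ that strikes a nucleus of mass number $A$ head-on is left with the minimum possible energy

$$E' = \left(\frac{A-1}{A+1}\right)^{2} E,$$

so for hydrogen ($A = 1$) a single collision can remove essentially all of the neutron’s energy. With the average logarithmic energy loss per collision, $\xi = 1$ for hydrogen, slowing a 2 MeV neutron to thermal energy (about 0.025 eV) takes only roughly $\ln(2\times 10^{6}/0.025) \approx 18$ collisions, which is why a few centimeters of paraffin moderates so effectively.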
The Birth Of Cultural Marxism: How The “Frankfurt School” Changed America
We often ask ‘How did America get in the condition it is in so quickly?’ In this article you will find that there have long been ‘termites’ in the ‘foundation’ eating away at our legal, educational, marital, spiritual, political and cultural institutions. The ‘termites’ are antiChrist in nature and their goal is the destruction of Christianity and America. We’ve come to know this condition as Political Correctness or more accurately as Cultural Marxism.
The 1950’s were a simple, romantic, and golden time in America.
California beaches, suburbia, and style. Atlas Shrugged was published, NASA was formed, and Elvis rocked the nation. Every year from 1950–1959 saw over 4 million babies born. The nation stood atop the world in every field.
Americans were once known to be the best in everything. They had confidence and were not afraid of anything. America was the envy of the world, including the Communist and Bolshevik world, which hated all that Americans stood for: truth, justice and Christianity. Communism stood for none of those traits and in fact devised strategies to destroy them. The Frankfurt School of Critical Theory was the antiChristian weapon that would be brought into America specifically to undermine and destroy American institutions, Liberty and Christianity.
It was an era of great economic prosperity in The Land of the Free.
So, what happened to the American traits of confidence, pride, and accountability?
The roots of Western cultural decay are very deep, having first sprouted a century ago. It began with a loose clan of ideologues inside Europe’s communist movement. Today, it is known as the Frankfurt School, and its ideals have perverted American society.
When Outcomes Fail, Just Change the Theory
Before WWI, Marxist theory held that if war broke out in Europe, the working classes would rise up against the bourgeoisie and create a communist revolution.
Well, as is the case with much of Marxist theory, things didn’t go too well. When war broke out in 1914, instead of starting a revolution, the proletariat put on their uniforms and went off to war.
After the war ended, Marxist theorists were left to ask, “What went wrong?”
Marxist Antonio Gramsci. Gramsci is best known for his theory of cultural hegemony, which describes how the state and ruling capitalist class – the bourgeoisie – use cultural institutions to maintain power in capitalist societies. This is commonly called Critical Theory today.
Two very prominent Marxist thinkers of the day were Antonio Gramsci and Georg Lukács. Each man, on his own, concluded that the working class of Europe had been blinded by the success of Western democracy and capitalism. They reasoned that until both had been destroyed, a communist revolution was not possible.
Gramsci and Lukács were both active in the Communist party, but their lives took very different paths.
Gramsci was jailed by Mussolini in Italy where he died in 1937 due to poor health.
In 1919, Lukács became minister of culture in Bolshevik Hungary. During this time, Lukács realized that if the family unit and sexual morals were eroded, society could be broken down.
Lukács implemented a policy he titled “cultural terrorism,” which focused on these two objectives. A major part of the policy was to target children’s minds through lectures that encouraged them to deride and reject Christian ethics.
In these lectures, graphic sexual matter was presented to children, and they were taught about loose sexual conduct.
Here again, a Marxist theory had failed to take hold in the real world. The people were outraged at Lukács’ program, and he fled Hungary when Romania invaded in 1919.
The Birth of Cultural Marxism
Georg Lukács was a Hungarian Marxist philosopher, aesthetician, literary historian, and critic. He was one of the founders of Western Marxism through the Frankfurt School. In Hungary Lukács was made People’s Commissar for Education and Culture. As a Marxist theoretician, he developed the idea of “Revolution and Eros” — sexual instinct used as an instrument of destruction.
All was quiet on the Marxist front until 1923 when the cultural terrorist turned up for a “Marxist study week” in Frankfurt, Germany. There, Lukács met a young, wealthy Marxist named Felix Weil.
Until Lukács showed up, classical Marxist theory was based solely on the economic changes needed to overthrow the ruling class. Weil was enthused by Lukács’ cultural angle on Marxism.
Weil’s interest led him to fund a new Marxist think tank—the Institute for Social Research. It would later come to be known as simply The Frankfurt School.
In 1930, the school changed course under new director Max Horkheimer. The team began mixing the ideas of Sigmund Freud with those of Marx, and cultural Marxism was born.
In classical Marxism, the workers of the world were oppressed by the ruling classes. The new theory was that everyone in society was psychologically oppressed by the institutions of Western culture. The school concluded that this new focus would need new vanguards to spur the change. The workers were not able to rise up on their own.
As fate would have it, the National Socialists came to power in Germany in 1933. It was a bad time and place to be a Jewish Marxist, as most of the school’s faculty was. So, the school moved to New York City, the bastion of Western culture at the time.
Coming to America
Max Horkheimer was a dedicated Marxist who helped to create what is known as Critical Theory through radical Marxism. In 1930 he joined with a Marxist study group, started by Felix Weil, which evolved into the Institute for Social Research and is now known as the Frankfurt School of Critical Theory. Critical Theory maintains that ideology (Christianity) is the principal “obstacle” to human liberation. Horkheimer defined Critical theory in a 1937 essay “Traditional and Critical Theory” as “a social theory oriented toward critiquing and changing society as a whole…” This is the “fundamental change” promoted by Barack Hussein Obama in 2008 and 2012, whereby radical and fundamental Talmudic Marxism was thrust into the foundation of America. This is a radical contrast to traditional America which is based and founded upon Christianity, in all our founding documents and laws.
In 1934, the Frankfurt School was reborn at Columbia University. Its members began to exert their ideas on American culture. It was at Columbia University that the school honed the tool it would use to destroy Western culture: the printed word.
The school published a lot of popular material. The first of these was Critical Theory.
Critical Theory is a play on semantics. The theory was simple: criticize every pillar of Western culture—family, democracy, common law, freedom of speech, and others. The hope was that these pillars would crumble under the pressure.
Next was a book Theodor Adorno co-authored, The Authoritarian Personality. It redefined traditional American views on gender roles and sexual mores as “prejudice.” Adorno compared them to the traditions that led to the rise of fascism in Europe.
Is it just a coincidence that the go-to slur for the politically correct today is “fascist”?
Herbert Marcuse was a Marxist at Columbia University at the Institute for Social Research (aka Frankfurt School). Marcuse promoted the “free love” movement in the 1960’s to break down Christian morality to open avenues for Marxist thought. Marcuse and the Frankfurt School professors worked to promote a state of hopelessness and alienation that they considered necessary to provoke a socialist revolution. His “liberating tolerance” ideas are the basis for today’s “social justice warriors.”
The school pushed its shift away from economics and toward Freud by publishing works on psychological repression.
Their works split society into two main groups: the oppressors and the victims. They argued that history and reality were shaped by those groups who controlled traditional institutions. At the time, that was code for males of European descent.
From there, they argued that the social roles of men and women were due to gender differences defined by the “oppressors.” In other words, gender did not exist in reality but was merely a “social construct.”
A Coalition of Victims
Adorno and Horkheimer returned to Germany when WWII ended. Herbert Marcuse, another member of the school, stayed in America. In 1955, he published Eros and Civilization.
In the book, Marcuse argued that Western culture was inherently repressive because it gave up happiness for social progress.
The book called for “polymorphous perversity,” a concept crafted by Freud. It posed the idea of sexual pleasure outside the traditional norms. Eros and Civilization would become very influential in shaping the sexual revolution of the 1960s.
Theodor W. Adorno was a Marxist and a leading member of the Frankfurt School of Critical Theory who insisted that the works of Freud, Marx, and Hegel were essential to a critique of modern society. He was drummed out of Germany as a communist. Author Dinesh D’Souza has revealed how Adorno redefined Fascism as a “far right” phenomenon instead of a “leftist” political ideology. Adorno propagated this sleight-of-hand agenda as a strategy to destroy the West.
Marcuse would be the one to answer Horkheimer’s question from the 1930s: Who would replace the working class as the new vanguards of the Marxist revolution?
Marcuse believed that it would be a victim coalition of minorities—blacks, women, and homosexuals.
The social movements of the 1960s—black power, feminism, gay rights, sexual liberation—gave Marcuse a unique vehicle to release cultural Marxist ideas into the mainstream. Railing against all things “establishment,” The Frankfurt School’s ideals caught on like wildfire across American universities.
Marcuse then published Repressive Tolerance in 1965 as the various social movements in America were in full swing. In it, he argued that tolerance of all values and ideas meant the repression of “correct” ideas.
It was here that Marcuse coined the term “liberating tolerance.” It called for tolerance of any ideas coming from the left but intolerance of those from the right. One of the overarching themes of the Frankfurt School was total intolerance for any viewpoint but its own. That is also a basic trait of today’s political-correctness believers.
To quote Max Horkheimer, “Logic is not independent of content.”
“We will make the West so corrupt that it stinks.” Willi Münzenberg
(Münzenberg was a Communist international media propagandist based in Weimar Berlin and later in Paris)
The long liquidation of Branobel begins.
Nuclear fission was discovered in December 1938 by chemists Otto Hahn and Fritz Strassmann and physicists Lise Meitner and Otto Robert Frisch. Fission is a nuclear reaction or radioactive decay process in which the nucleus of an atom splits into two or more smaller, lighter nuclei and often other particles. The fission process often produces gamma rays and releases a very large amount of energy, even by the energetic standards of radioactive decay. Scientists already knew about alpha decay and beta decay, but fission assumed great importance because the discovery that a nuclear chain reaction was possible led to the development of nuclear power and nuclear weapons. Hahn was awarded the 1944 Nobel Prize in Chemistry for the discovery of nuclear fission.
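As an illustration of the energy scale involved, one commonly cited fission channel is shown below. It is a representative textbook example, not a reaction taken from the source:

$$n + {}^{235}\mathrm{U} \rightarrow {}^{141}\mathrm{Ba} + {}^{92}\mathrm{Kr} + 3n + \sim 200\ \mathrm{MeV}$$

Roughly 200 MeV per nucleus is tens of millions of times the few electron-volts released per atom in chemical combustion, which is why fission, and the chain reaction made possible by the extra neutrons, mattered so much.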
Fermi won the 1938 Nobel Prize in Physics for his “demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons”. However, not everyone was convinced by Fermi’s analysis of his results. Ida Noddack suggested in September 1934 that, instead of creating a new, heavier element 93:
One could assume equally well that when neutrons are used to produce nuclear disintegrations, some distinctly new nuclear reactions take place which have not been observed previously with proton or alpha-particle bombardment of atomic nuclei. In the past one has found that transmutations of nuclei only take place with the emission of electrons, protons, or helium nuclei, so that the heavy elements change their mass only a small amount to produce near neighbouring elements. When heavy nuclei are bombarded by neutrons, it is conceivable that the nucleus breaks up into several large fragments, which would of course be isotopes of known elements but would not be neighbours of the irradiated element.
It was clear to many scientists at Columbia that they should try to detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a Columbia University group conducted the first nuclear fission experiment in the United States, in the basement of Pupin Hall. The experiment involved placing uranium oxide inside an ionization chamber, irradiating it with neutrons, and measuring the energy released.
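The size of the expected signal can be estimated the way Meitner and Frisch famously did, from the mutual Coulomb repulsion of the two touching fragments. The numbers below are a back-of-envelope sketch with illustrative fragment choices and radii, not figures from the source:

$$E \approx \frac{Z_1 Z_2 e^2}{4\pi\varepsilon_0\,(R_1 + R_2)} \approx \frac{1.44\ \mathrm{MeV\,fm}\times 56 \times 36}{1.2\,(141^{1/3} + 92^{1/3})\ \mathrm{fm}} \approx 250\ \mathrm{MeV},$$

the right order of magnitude for the roughly 200 MeV per fission actually observed in such ionization-chamber measurements.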
September 3, 1939 – September 2, 1945: WWII
December 7, 1941, Japan launched a devastating surprise attack on the U.S. naval facilities at Pearl Harbor, Hawaii.
That very day, President Franklin D. Roosevelt penned his iconic speech to Congress asking for a declaration of war against Japan for the “unprovoked and dastardly attack.” And on December 8, Roosevelt addressed Congress and the nation, calling December 7, 1941 “a date which will live in infamy.” Congress’s response was swift, with a near-unanimous roll call vote in the House (the single dissenter was pacifist Jeannette Rankin of Montana) and a unanimous declaration of war in the Senate.
The element was first detected (1941) as the isotope plutonium-238 by American chemists Glenn T. Seaborg, Joseph W. Kennedy, and Arthur C. Wahl, who produced it by deuteron bombardment of uranium-238 in the 152-cm (60-inch) cyclotron at Berkeley, California. The element was named after the then-planet Pluto. Traces of plutonium have subsequently been found in uranium ores, where it is not primeval but naturally produced by neutron irradiation.
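The production route described here can be written out in standard notation; the reaction and the decay half-life are textbook values supplied for clarity, not details from the source:

$${}^{238}\mathrm{U} + {}^{2}\mathrm{H} \rightarrow {}^{238}\mathrm{Np} + 2n, \qquad {}^{238}\mathrm{Np}\ \xrightarrow{\ \beta^{-},\ t_{1/2}\,\approx\,2.1\ \mathrm{d}\ }\ {}^{238}\mathrm{Pu}$$

The deuteron adds a proton and a neutron to the uranium nucleus while two neutrons boil off, giving neptunium-238, which beta-decays to plutonium-238.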
The Holocaust was the genocide of European Jews during World War II. Between 1941 and 1945, Nazi Germany and its collaborators systematically murdered some six million Jews across German-occupied Europe.
Manhattan Project active: 1942–1946
On June 3, 1942, Roosevelt signed three final declarations of war on the remaining Axis powers. Bulgaria, Hungary and Romania each had their own reasons for allying with Germany in 1940. Bulgaria had territorial disputes with Yugoslavia and Greece and thought Germany could provide some muscle. Hungary was afraid of being swallowed up by the Soviet Union. And Romania was ruled by fascists and antisemites who sided with the Nazis.
The Office of Strategic Services was the intelligence agency of the United States during World War II. The OSS was formed as an agency of the Joint Chiefs of Staff to coordinate espionage activities behind enemy lines for all branches of the United States Armed Forces.
Formed: June 13, 1942
Bengal Famine of 1943
Bengal was affected by a series of natural disasters late in 1942. The winter rice crop was afflicted by a severe outbreak of fungal brown spot disease, while, on 16–17 October, a cyclone and three storm surges ravaged croplands, destroyed houses and killed thousands, at the same time dispersing high levels of fungal spores across the region and increasing the spread of the crop disease.
The fungus reduced the crop yield even more than the cyclone. After describing the horrific conditions he had witnessed, the mycologist S.Y. Padmanabhan wrote that the outbreak was similar in impact to the potato blight that caused the Irish Great Famine: “Though administrative failures were immediately responsible for this human suffering, the principal cause of the short crop production of 1942 was the [plant] epidemic … nothing as devastating … has been recorded in plant pathological literature”.
The Bengal cyclone came through the Bay of Bengal, landing on the coastal areas of Midnapore and 24 Parganas. It killed 14,500 people and 190,000 cattle, whilst rice paddy stocks in the hands of cultivators, consumers, and dealers were destroyed.
It also created local atmospheric conditions that contributed to an increased incidence of malaria. The three storm surges which followed the cyclone destroyed the seawalls of Midnapore and flooded large areas of Contai and Tamluk. Waves swept an area of 450 square miles (1,200 km2), floods affected 400 square miles (1,000 km2), and wind and torrential rain damaged 3,200 square miles (8,300 km2).
For nearly 2.5 million Bengalis, the cumulative damage of the cyclone and storm surges to homes, crops and livelihoods was catastrophic:
Corpses lay scattered over several thousand square miles of devastated land, 7,400 villages were partly or wholly destroyed, and standing flood waters remained for weeks in at least 1,600 villages. Cholera, dysentery and other water-borne diseases flourished. 527,000 houses and 1,900 schools were lost, over 1,000 square miles of the most fertile paddy land in the province was entirely destroyed, and the standing crop over an additional 3,000 square miles was damaged.
The cyclone, floods, plant disease, and warm, humid weather reinforced each other and combined to have a substantial impact on the aman rice crop of 1942. Their impact was felt in other aspects as well, as in some districts the cyclone was responsible for an increased incidence of malaria, with deadly effect.
October 1942: Unreliable crop forecasts
A whirlwind of catastrophic events brought about the Bengal Famine of 1943. With World War II raging and Japanese imperialism expanding, Bengal lost its largest trading partner in Burma. A majority of the food the Bengalis consumed was imported from Burma, but the Japanese occupation cut off the trade. In 1942, Bengal was hit by a cyclone and three separate tidal waves. The ensuing floods destroyed 3,200 square miles of viable farmland. A fungal disease then struck, destroying 90% of the rice crop in the region. Meanwhile, refugees fleeing the Japanese advance through Burma entered the region by the millions, increasing the need for food supplies. By December of 1943, 7 million Bengalis and Burmese refugees were dead due to starvation.
A favourite trope of postcolonial academics and their left-liberal public has been the alleged infamy of Britain’s most cherished hero, Winston Churchill, charged with everything from casual racism to actual genocide. The worst accusation has been that of deliberately starving four million Bengalis to death in the famous famine of 1943. What is the truth?
Churchill’s first encounter with India dates as far back as the 1890s, as a young subaltern in the British Indian Army. His first impressions were not encouraging. He wrote home about “this tedious land of India” and about the “great work” Britain was doing with “her high mission to rule these primitive but agreeable races for their welfare and our own.” He travelled extensively across the subcontinent, from the Swat Valley of the North West Frontier, through the central United Provinces, down to the Deccan and Bangalore. “You could lift the heat with your hands,” he complained. “It sat on your shoulders like a knapsack, it rested on your head like a nightmare.”
How did this early aversion evolve into his deep-seated later conviction of India’s central role as the jewel in Britain’s imperial crown? His first major political encounter with India came in the immediate aftermath of the infamous Amritsar massacre in the Punjab in April 1919, when a British-led firing squad had shot dead 367 unarmed protesters and wounded thousands more. The massacre had divided British public opinion down the centre, with the Liberals, then leading a coalition government, condemning General Dyer, the British Indian officer responsible, while most Conservatives and diehard imperialists rallied to his defence. Surprisingly, given his later views, Churchill, then Liberal Secretary for War, led the House of Commons debate condemning Dyer for what he termed a “monstrous” and “un-British” atrocity, and had him recalled and cashiered from the Indian Army.
Churchill’s policies contributed to 1943 Bengal famine – study
By the time Churchill next addressed the Indian question, he had become a Conservative backbencher. The occasion was the Round Table Conferences held in London by the Baldwin government from 1931 to 1933 to try and agree a new constitutional settlement with Indian representatives. While the Indian National Congress, led by Gandhi, was demanding independence and claimed to speak for all India, its claims were hotly contested by India’s 100 million Muslims, 50 million Untouchables and 546 autonomous princes. While the British government would have liked a united India to evolve to self-governing dominion status, like Canada and Australia, the key hurdle was how to guarantee the rights of Indian minorities against majoritarian Hindu domination.
Indian citizens waiting in line at a soup kitchen.
This dilemma found Churchill leading Tory backbench resistance to what he considered a surrender to Hindu Brahmin rule by Congress. In his speeches in Parliament and up and down the country, Churchill warned that anything like dominion status, or even the promise of it, would fracture the precious unity of the subcontinent, built so painstakingly by the Raj over a century and a half. Responsible government, he argued, required levels of literacy and education remote in India for at least a generation. India, he was convinced, with its multitude of languages, castes and religions was even more diverse than Europe and thoroughly unsuited to Western-style, democratic self-government. Insofar as democracy proved incompatible with unity, India’s partition in 1947 was to prove him right.
Churchill fought a persistent rearguard action against the passing of the Government of India Act of 1935, which embodied the constitutional concessions agreed at the Round Table Conferences. While retaining separate electorates for Muslims, the Act granted provincial autonomy under popular ministries to be elected by a much expanded new electorate of 30 million. It also held out the hope of an All-India federation if the provinces and the autonomous princely states agreed. Though unable to halt the passing of the Act, Churchill’s opposition to it, which included filibustering in parliament, effectively cut him off from holding office in the governments of either Ramsay MacDonald or Baldwin.
How far was this stand against Indian Home Rule driven by racism and prejudice? Churchill undoubtedly shared late Victorian assumptions about white, male Anglo-Saxon superiority, embodied in empire. While he expressed respect for Islam, as a fellow religion of the book, he considered Hinduism a barbaric, superstitious and idolatrous system for ensuring the dominance of the Brahmin caste, the perpetuation of untouchability and the subjugation of women. His very genuine concern for the rights of women is evident from his correspondence with Katherine Mayo, the American feminist whose book, Mother India, castigating the oppression of Indian women, provoked a hostile nationalist outcry in India. Churchill wrote to Mayo applauding her book and publicly endorsed her condemnation of Hindu-sanctioned child brides.
The Rotary Club relief committee at a free kitchen in Kolkata in 1943.
Churchill’s revulsion at certain Hindu social and cultural practices was genuine, but it was a far cry from this to the accusation that he deliberately starved millions of Bengali Hindus a decade later in the famous Bengal Famine of 1943. Churchill had by then returned from the backbenches to lead Britain’s War Cabinet. The famine took place at the height of the war, with the Japanese already occupying Burma and invading the British Indian province of Bengal, bombing its capital, Calcutta, and patrolling its coast with submarines. The famine raged for about six months, from the summer of 1943 till the end of that year, and estimates of its victims have ranged from half a million upwards, depending on whether one included its indirect and long-term effects.
So how has a 67-year-old British Prime Minister in poor health, five thousand miles away, fighting near annihilation in a world war, come to be charged with causing such a cataclysmic disaster? The attempt to lay this at Churchill’s door stems from a sensationalist book published in 2010 by a Bengali American journalist called Madhusree Mukerjee. As its title, Churchill’s Secret War, indicates, it was a largely conspiracist attempt to pin responsibility on distant Churchill for undoubted mistakes on the ground in Bengal. The actual evidence shows that Churchill believed, based on the information he had been getting, that there was no food supply shortage in Bengal, but a demand problem caused by local mismanagement of the distribution system.
Winston Churchill in 1940. Britain’s wartime leader has been quoted as blaming the famine on the fact Indians were ‘breeding like rabbits’. Photograph: Cecil Beaton/IWM via Getty Images
This was largely the result of wartime supply constraints, with most of Bengal’s boats commandeered or disabled, and uneasy relations between the elected, Muslim-led, coalition government of Bengal and its largely Hindu grain merchants, notorious for hoarding and speculation.
Churchill had resisted the 1935 constitution granting Indian provinces autonomy; but, by the 1940s, he regarded the food situation in Bengal as primarily a matter for its elected ministry rather than Whitehall. Within the War Cabinet itself, Churchill’s role was one of broad oversight, rather than detailed management, so the idea that he had much influence on actual relief aid to Bengal is far-fetched, especially at the height of the war.
1943 had begun as a year of normal harvest, leaving the Bengal ministry sanguine about food supplies and the Viceroy’s government in Delhi reluctant to intervene, using its reserve powers. Once the Viceroy did intervene, the famine was rapidly brought under control and petered out within months. And it was Churchill who replaced the lethargic Lord Linlithgow as viceroy with the efficient and politically sensitive Field Marshal Wavell.
Even Mukerjee never blames Churchill for actually causing the Bengal Famine, but for compounding it by refusing to allow shipments of grain from Australia and Canada, bound for Europe, to be diverted to Bengal. One has only to look at a map to see what a nonsense it would have been for Australian ships bound for Europe to come anywhere near the Bay of Bengal and run the gauntlet of Japanese submarines.
The true facts about food shipments to Bengal, amply recorded in the British War Cabinet and Government of India archives, are that more than a million tons of grain arrived in Bengal between August 1943, when the War Cabinet first realised the severity of the famine, and the end of 1944, when the famine had petered out. This was food aid specifically sent to Bengal, much of it on Australian ships, despite strict food rationing in England and severe food shortages in newly liberated southern Italy and Greece. The records show that, far from seeking to starve India, Churchill and his cabinet sought every possible way to alleviate the suffering without undermining the war effort.
On 4 August 1943, when the War Cabinet chaired by Churchill first realised the enormity of the famine, it agreed that 150,000 tons of Iraqi barley & Australian wheat should be sent to Bengal, with Churchill himself insisting on 24 September that “something must be done.” Though emphatic “that Indians are not the only people who are starving in this war,” he agreed to send a further 250,000 tons, to be shipped over the next four months.
An emaciated family who arrived in Kolkata in search of food in November 1943
On 7 October, Churchill told the War Cabinet that one of the new viceroy’s first duties was to see to it “that famine and food difficulties were dealt with.” He wrote to Wavell the next day: “Every effort must be made, even by the diversion of shipping urgently needed for war purposes, to deal with local shortages.” By January 1944, Bengal had received a total of 130,000 tons of barley from Iraq, 80,000 tons of wheat from Australia and 10,000 from Canada, followed by a further 100,000 from Australia. Then, on 14 February 1944, Churchill called an emergency meeting of the War Cabinet to see if more food aid could be sent to Bengal without wrecking Allied plans for the coming Normandy landings.
On 24 April 1944, the Cabinet minutes recorded: “the Prime Minister said that it was clear that His Majesty’s Government could only provide further relief for the Indian situation at the cost of incurring grave difficulties in other directions. At the same time, his sympathy was great for the sufferings of the people of India.” These were not empty words. A few days later, Churchill asked US President Roosevelt for shipping to supply Bengal, saying he was “seriously concerned” about the famine, that Wavell still needed a million extra tons of grain, that the wheat was available in Australia, but without ships to transport it. The request was refused by the US Administration on the grounds that it needed all its shipping to supply the Pacific theatre and the impending D Day landings.
Despite such obstacles, by the end of 1944 Wavell’s much requested one million additional tons had been secured from Australia and the allied South East Asia Command and shipped to Bengal. To Churchill must go the credit for appointing in October 1943 the man arguably most responsible for these successes. British India’s most able and conscientious viceroy, Field Marshal Archibald Wavell, with his long and distinguished record of service in India, his intimate knowledge of its peoples and languages and his experience of large-scale military logistics, was just the person to halt the Bengal famine in its tracks, drafting in the army to get food supplies moving quickly from surplus to deficit areas.
Much of the case against Churchill rests not on his actions, but on his words; namely, his various racist comments about Indians, and Bengalis in particular. Most of these have been taken out of context. Churchill was certainly no friend to Indian nationalist leaders, most of whom he regarded as moralising humbugs. He was an unashamed imperialist, like many of his generation, and staunchly committed to maintaining India’s unity within the British Empire. He had a strongly held conviction that too sudden and rapid a move to democracy and independence would tear the subcontinent apart on sectarian lines, a fear that events would justify.
On the other hand, Churchill repeatedly voiced his admiration for the gallantry of Indian troops, noting in his war memoirs: “The unsurpassed bravery of Indian soldiers and officers, both Moslem and Hindu, shine for ever in the annals of war. Upwards of two and a half million Indians volunteered to serve in the forces… The response of the Indian peoples, no less than the conduct of their soldiers, makes a glorious final page in the story of our Indian Empire.”
Despite his fears about Indian independence, Churchill’s views noticeably mellowed over the years. On 5 Feb 1942, the Prime Minister proposed to Cabinet that he personally visit India to make its Congress leaders an offer of post-war independence, in return for their supporting the war effort. But the Prime Minister could not be spared, and instead the War Cabinet sent out its senior Labour minister, Sir Stafford Cripps, known for his friendship with Congress leaders. When the Cripps mission failed to meet their demand for immediate independence, Congress launched the Quit India movement of civil disobedience against the Raj and resolved to offer only passive resistance to the Japanese invasion.
On being informed of this in September 1942, an apoplectic Churchill exclaimed to Leopold Amery, his Secretary of State for India: “I hate Indians. They are a beastly people with a beastly religion.” He was referring to Hinduism, rather than Islam, given loyal support for the war effort from the Muslim League. Churchill saw Gandhi’s decision to launch the Quit India movement in the middle of the war as a stab in the back when Britain most needed and deserved loyal support. He also (like many Indian liberals and socialists) saw Gandhi’s frequent resort to political fasts as a form of emotional blackmail. And he was appalled, as were many Congressmen, by extreme nationalists like the Bengali leader Subhas Chandra Bose joining hands with Hitler and the Japanese, a fact not calculated to endear Bengalis in general to Churchill.
Churchill’s abusive comments about Gandhi, Indians and Bengalis need to be seen in that context. They also need to be seen in the context of his penchant for making outrageous comments that he didn’t really mean in order to shock or tease. The long-suffering butt of many such remarks was Leopold Amery, who had also been his childhood friend. Churchill used to rag him when they were at school together at Harrow and once even threw him into a swimming pool fully clothed. Amery grew up into a worthy but rather longwinded and tedious speaker with a talent for boring his listeners at Cabinet meetings. Winston liked to interrupt Amery’s long perorations on India with racist jokes designed to shock him and cut him short. Amery was not amused and once responded by likening Churchill’s language to Hitler’s. None of this was meant to be taken very seriously, but Amery made a habit of writing it all down rather solemnly in his diaries.
In July 1944, over lunch with the Indian statesman Sir Ramaswamy Mudaliar, a member of the War Cabinet, Churchill was heard assuring him that the old notion that the Indian was in any way inferior to the White man must disappear. “We must all be pals together,” the Prime Minister declared.
“I want to see a great shining India, of which we can be as proud as we are of a great Canada or a great Australia.” Referring to India’s rapidly growing population, Churchill also remarked: “It was only thanks to the beneficence and wisdom of British rule in India, free from any hint of war for a longer period than almost any other country in the world, that India had been able to increase and multiply to this astonishing extent.” On another occasion, he proudly told the Spanish Ambassador to London, “Since the English occupation of India the native population has increased by two hundred million,” and he contrasted this with the near-extinction of American Indians, a comparison he was fond of making on his trips to the US. Whatever the merits of India’s population explosion under stable British rule, these were hardly the sentiments of someone willing genocide by starvation on the Indian people.
Fort Detrick is a United States Army Futures Command installation located in Frederick, Maryland.
Fort Detrick was the center of the U.S. biological weapons program from 1943 to 1969.
One of the most well-known victims of the MK-ULTRA experiments was Frank Olson. Olson was a CIA officer who had spent his entire career at Detrick and knew its deepest secrets. When he began musing about quitting the CIA, his comrades saw a security threat. Gottlieb summoned the team to a retreat and arranged for Olson to be drugged with LSD.
Did CIA Scientist Frank Olson Jump To His Death, Or Was He Pushed?
Family tragedy led me to write The Coldest Warrior.
My uncle Frank Olson died sometime around 2:30 a.m. on Nov. 28, 1953 when he “jumped or fell” from his room on the 13th floor of the Statler Hotel in New York City. The New York Medical Examiner’s report contained that ambiguous description of how Frank came to land on the sidewalk early that morning. Frank Olson was a highly skilled Army scientist who worked at Fort Detrick in Frederick, Maryland, a top-secret U.S. Army facility that researched biological warfare agents. He had gone to New York to see a security-cleared psychiatrist in the company of a CIA escort.
Olson’s sealed casket was delivered to his wife, my aunt, two days later. She was discouraged from viewing the body because, she was told, he had suffered disfiguring facial injuries. Olson was buried the next day. She received an expedited pension shortly after that. That was all the family knew for 22 years.
Then, in June 1975, one bit of new information came to light. Buried inside a report by The Rockefeller Commission, which had been established by President Gerald Ford to investigate allegations of illegal CIA activity within the U.S., was a two-paragraph account of an army scientist who had been unwittingly given LSD and died in a fall from a hotel window in New York. The similarity of the case drew the family’s attention and, after consulting the CIA, the Army confirmed the scientist was Frank Olson. Headlines followed in The New York Times and The Washington Post: “Suicide Revealed.”
Within 10 days the family was sitting in the Oval Office receiving an unprecedented personal apology from the president of the United States for Olson’s wrongful death. Within a year, the family received a $750,000 monetary settlement for which they had to sign a broad release of claims against the U.S. government.
The case might have ended there, but Eric Olson, Frank’s eldest son, became increasingly uncomfortable with the official narrative. He had his father’s body exhumed in 1994 and examined by a respected forensic pathologist, who found no disfiguring facial injuries. But he did find a suspicious hematoma on Olson’s left temple, which led him to conclude that Olson had been stunned by a blow to the head in the hotel room. To the conflicting theories that Frank Olson “jumped or fell,” another possibility was added: he had been thrown out the window.
In the years that followed, information about the nature of Olson’s work came to light. As Acting Chief, Special Operations Division at Fort Detrick, he was liaison to the CIA’s Technical Services Staff, the agency’s R&D unit, which gave him visibility into some of the CIA’s most sensitive operations. He was involved in, or aware of, the use of anthrax against North Korean civilian populations; top secret programs Artichoke and MKULTRA, which involved extreme interrogation techniques; cooperation with Japanese and Nazi war criminals to benefit from their banned medical research methods. Olson was a man who knew some of the CIA’s darkest secrets.
Slowly, over time, with a drip of information assembled by Eric, a new narrative emerged. Olson was a man who’d grown uncomfortable with the nature of his work, showed signs of being a security risk, and then was made unstable when drugged with LSD at an offsite business meeting intended to test his trustworthiness. He had become a man who knew too much.
This revised narrative shed new light on earlier events. Olson’s rushed burial and the expedited pension approval were meant to stop the family from asking questions. Later, the speedy presidential apology, the $750,000 payment, and their waiver of claims conspired to continue the cover-up.
Frank Olson’s death has come to embody our collective fascination with the Cold War’s darkest secrets, and it has shone a bright light on the dubious privileges men in the CIA gave themselves in the name of national security. The murkiness of the case, even at a distance of 66 years, still attracts great interest. Errol Morris explored the case in his 2017 Netflix mini-series, Wormwood, and award-winning former New York Times reporter Stephen Kinzer addressed the case in his 2019 bestselling Poisoner-in-Chief, a biography of Sidney Gottlieb, Olson’s CIA boss.
The Olson case continues to interest us for many of the same reasons that we are drawn to the murder of Jamal Khashoggi, the Washington Post reporter and Saudi citizen executed in Turkey by agents of the Saudi government, and former FSB officer Aleksandr Litvinenko, poisoned with radioactive polonium-210 in 2006 on orders from the Kremlin. Extra-judicial executions have one thing in common: The killings are done by individuals acting under orders from government authorities without sanction of any legal process.
Olson’s death remains officially classified as “undetermined,” but all the evidence that has emerged over several decades points toward murder and none of it points away. Like a black hole, its existence is proven by evidence that points to it, not by direct observation. Among the things that have become known:
- Three months before his death Olson spent two weeks with his brother-in-law (my father) reroofing a family cabin in the Adirondacks. My father saw a man who was in a deep moral crisis. He wasn’t suicidal. He was a man who had begun reading the Bible to find answers to disturbing questions.
- On Feb. 23, 1954 (three months after Olson’s death), the CIA and the Department of Justice signed a Memorandum of Understanding that allowed the CIA to withhold information relating to criminal activity if disclosure compromised intelligence sources and methods. In 1975, Representative Bella Abzug questioned Lawrence Houston, CIA general counsel at the time of Olson’s death and an author of the memo. She asked, with specific reference to Olson, “In other words, the Memorandum of Understanding, in your judgment, gave authority to the CIA to make decisions to give immunity to individuals who happened to work for the CIA for all kinds of crimes, including murder.” Houston answered, “Yes.”
- Mossad, which started using “targeted killings” in 1962, for decades included the death of Frank Olson in its assassination training program as an example of the perfect murder—“perfect” due to the skill with which it had been made to look like a suicide.
I caught up with Stephen Kinzer over lunch last fall after he became aware of my new novel, The Coldest Warrior, which is based on the Frank Olson case. The novel, unlike Kinzer’s chapter on the case, treats the death of the Olson character (renamed Wilson) as a murder. I tell the story from within the CIA—murder, cover-up, and a power struggle among CIA factions trying to deal with the repercussions of the case. The novel puts a human face on the Cold War by showing the psychological burdens of its characters. Honorable men who work in covert operations inevitably bring some of the darkness into themselves, suffering the moral hazards of a line of work that sanctions lying, deceit, and murder. Doubt and paranoia are bred in a culture of secrecy, as is a sophisticated amorality in men at the top of the intelligence bureaucracy.
Kinzer’s account of the Olson case stops at the precipice of knowledge—where every account has stopped because that is where the trail of evidence ends. The rules of journalism don’t reward speculation. Toward the end of our lunch, Kinzer agreed that the CIA was capable of murder in 1953; he agreed that the CIA believed the top-secret projects Olson knew of, if made public, would threaten national security and embarrass the agency; and he agreed that the CIA believed Olson was a security risk.
Kinzer paused at the end of our discussion. “In my gut, he was murdered.” Gut. The instinct for truth.
I wrote The Coldest Warrior with the freedom that fiction enjoys to imagine the world beyond the precipice of knowledge. Albert Camus said it well: “Fiction is the lie through which we tell the truth.”
Today, it’s a cutting-edge lab. In the 1950s and 1960s, it was the center of the U.S. government’s darkest experiments.
In 1954, a prison doctor in Kentucky isolated seven black inmates and fed them “double, triple and quadruple” doses of LSD for 77 days straight. No one knows what became of the victims. They may have died without knowing they were part of the CIA’s highly secretive program to develop ways to control minds — a program based out of a little-known Army base with a dark past, Fort Detrick.
Suburban sprawl has engulfed Fort Detrick, an Army base 50 miles from Washington in the Maryland town of Frederick. Seventy-six years ago, however, when the Army selected Detrick as the place to develop its super-secret plans to wage germ warfare, the area around the base looked much different. In fact, it was chosen for its isolation. That’s because Detrick, still thriving today as the Army’s principal base for biological research and now encompassing nearly 600 buildings on 13,000 acres, was for years the nerve center of the CIA’s hidden chemical and mind control empire.
Detrick is today one of the world’s cutting-edge laboratories for research into toxins and antitoxins, the place where defenses are developed against every plague, from crop fungus to Ebola. Its leading role in the field is widely recognized. For decades, though, much of what went on at the base was a closely held secret. Directors of the CIA mind control program MK-ULTRA, which used Detrick as a key base, destroyed most of their records in 1973. Some of its secrets have been revealed in declassified documents, through interviews and as a result of congressional investigations. Together, those sources reveal Detrick’s central role in MK-ULTRA and in the manufacture of poisons intended to kill foreign leaders.
In 1942, alarmed by reports that Japanese forces were waging germ warfare in China, the Army decided to launch a secret program to develop biological weapons. It hired a University of Wisconsin biochemist, Ira Baldwin, to run the program and asked him to find a site for a new bio-research complex. Baldwin chose a mostly abandoned National Guard base below Catoctin Mountain called Detrick Field. On March 9, 1943, the Army announced that it had renamed the field Camp Detrick, designated it as headquarters of the Army Biological Warfare Laboratories and purchased several adjacent farms to provide extra room and privacy.
After World War II, Detrick faded in importance. The reason was simple: The United States had nuclear weapons, so developing biological ones no longer seemed urgent. As the Cold War began, however, two seemingly unrelated developments on opposite sides of the world stunned the newly created Central Intelligence Agency and gave Detrick a new mission.
The first was the show trial of the Roman Catholic Primate of Hungary Joseph Cardinal Mindszenty for treason in 1949. At the trial, the cardinal appeared disoriented, spoke in a monotone and confessed to crimes he had evidently not committed. Then, after the Korean War ended, it was revealed many American prisoners had signed statements criticizing the United States and, in some cases, confessing to war crimes. The CIA came up with the same explanation for both: brainwashing. Communists, the CIA concluded, must have developed a drug or technique that enabled them to control human minds. No evidence of this ever emerged, but the CIA fell hard for the fantasy.
In the spring of 1949 the Army created a small, super-secret team of chemists at Camp Detrick called the Special Operations Division. Its assignment was to find military uses for toxic bacteria. The coercive use of toxins was a new field, and chemists at the Special Operations Division had to decide how to begin their research.
At the same time, the CIA had just established its own corps of chemical magicians. CIA officers in Europe and Asia were regularly capturing suspected enemy agents and wanted to develop new ways to draw prisoners in interrogation away from their identities, induce them to reveal secrets and perhaps even program them to commit acts against their will. Allen Dulles, who ran the CIA’s covert-operations directorate and would soon be promoted to direct the agency, considered his mind control project — first named Bluebird, then Artichoke, then MK-ULTRA — to be of supreme importance, the difference between the survival and extinction of the United States.
In 1951, Dulles hired a chemist to design and oversee a systematic search for the key to mind control. The man he chose, Sidney Gottlieb, was not part of the silver-spoon aristocracy from which most officers of the early CIA were recruited, but a 33-year-old Jew from an immigrant family who limped and stuttered. He also meditated, lived in a remote cabin without running water and rose before dawn to milk his goats.
Gottlieb wanted to use Detrick’s assets to propel his mind control project to new heights. He asked Dulles to negotiate an accord that would formalize the connection between the military and the CIA in this pursuit. Under the arrangement’s provisions, according to a later report, “CIA acquired the knowledge, skill, and facilities of the Army to develop biological weapons suited for CIA use.”
Taking advantage of this arrangement, Gottlieb created a hidden CIA enclave inside Camp Detrick. His handful of CIA chemists worked so closely with their comrades in the Special Operations Division that they became a single unit.
Some scientists outside the tight-knit group suspected what was happening. “Do you know what a ‘self-contained, off-the-shelf operation’ means?” one of them asked years later. “The CIA was running one in my lab. They were testing psychochemicals and running experiments in my labs and weren’t telling me.”
Gottlieb searched relentlessly for a way to blast away human minds so new ones could be implanted in their place. He tested an astonishing variety of drug combinations, often in conjunction with other torments like electroshock or sensory deprivation. In the United States, his victims were unwitting subjects at jails and hospitals, including a federal prison in Atlanta and an addiction research center in Lexington, Kentucky.
In Europe and East Asia, Gottlieb’s victims were prisoners in secret detention centers. One of those centers, built in the basement of a former villa in the German town of Kronberg, might have been the first secret CIA prison. While CIA scientists and their former Nazi comrades sat before a stone fireplace discussing the techniques of mind control, prisoners in basement cells were being prepared as subjects in brutal and sometimes fatal experiments.
These were the most gruesome experiments the U.S. government ever conducted on human beings. In one of them, seven prisoners in Lexington, Kentucky, were given multiple doses of LSD for 77 days straight. In another, captured North Koreans were given depressant drugs, then dosed with potent stimulants and exposed to intense heat and electroshock while they were in the weakened state of transition. These experiments destroyed many minds and caused an unknown number of deaths. Many of the potions, pills and aerosols administered to victims were created at Detrick.
One of the most well-known victims of the MK-ULTRA experiments was Frank Olson. Olson was a CIA officer who had spent his entire career at Detrick and knew its deepest secrets. When he began musing about quitting the CIA, his comrades saw a security threat. Gottlieb summoned the team to a retreat and arranged for Olson to be drugged with LSD. A week later, Olson died in a plunge from a hotel window in New York. The CIA called it suicide. Olson’s family believes he was thrown from the window to prevent him from revealing what was brewing inside Camp Detrick.
A decade of intense experiments taught Gottlieb that there are indeed ways to destroy a human mind. He never, however, found a way to implant a new mind in the resulting void. The grail he sought eluded him. MK-ULTRA ended in failure in the early 1960s. “The conclusion from all these activities,” he admitted afterward, “was that it was very difficult to manipulate human behavior in this way.”
Nonetheless Fort Detrick, as it was renamed in 1956, remained Gottlieb’s chemical base. After the end of MK-ULTRA, he used it to develop and store the CIA’s arsenal of poisons. In his freezers, he kept biological agents that could cause diseases including smallpox, tuberculosis and anthrax as well as a number of organic toxins, including snake venom and paralytic shellfish poison. He developed poisons intended to kill Cuban leader Fidel Castro and Congolese leader Patrice Lumumba.
During this period, Fort Detrick’s public profile rose uncomfortably. No one knew the CIA was making poisons there, but its role as the country’s principal center for research into biological and anti-crop warfare became clear. From mid-1959 to mid-1960, protesters convened once a week at the gate. “No rationalization of ‘defense’ can justify the evil of mass destruction and disease,” they wrote in a statement.
In 1970, President Richard Nixon ordered all government agencies to destroy their supplies of biological toxins. Army scientists complied. Gottlieb hesitated. He had spent years assembling this deadly pharmacopeia and did not want to destroy it. After meeting with CIA Director Richard Helms, he reluctantly agreed that he had no choice.
One batch, a supremely potent shellfish poison known as saxitoxin, escaped destruction, though. Two canisters containing nearly 11 grams of saxitoxin — enough to kill 55,000 people — were in Gottlieb’s depot at Fort Detrick. Before Army technicians could remove them, two officers from the Special Operations Division packed them into the trunk of a car and drove them to the Navy Bureau of Medicine and Surgery in Washington, where the CIA maintained a small chemical warehouse. One of Gottlieb’s aides later testified that he had ordered this operation without informing his boss. By the time the saxitoxin was discovered and destroyed in 1975, Gottlieb had retired.
Gottlieb was the most powerful unknown American of the 20th century — unless there was someone else who conducted brutal experiments across three continents and had a license to kill issued by the U.S. government. Detrick, his indispensable base, still contains untold stories of the cruelty that began there — just 50 miles from the center of the government that has kept them sealed for decades.
Building 470 on the campus of Fort Detrick in Frederick, Md.
How vintage actress Dorothy Lamour sold more than $300 million in war bonds during WWII
Dorothy Lamour (1914-1996), the famous American actress and singer most popular in the 1940s, wasn’t just a star on the screen — she also threw her weight behind numerous WWII war bond sales efforts, and handily topped those charts.
On 6 and 9 August 1945, the United States detonated two atomic bombs over the Japanese cities of Hiroshima and Nagasaki, respectively. The aerial bombings together killed between 129,000 and 226,000 people, most of whom were civilians, and remain the only use of nuclear weapons in an armed conflict. Japan surrendered to the Allies on 15 August, six days after the bombing of Nagasaki and the Soviet Union’s declaration of war against Japan and invasion of Japanese-occupied Manchuria.
The UN was established after World War II with the aim of preventing future wars, succeeding the ineffective League of Nations. On 25 April 1945, 50 governments met in San Francisco for a conference and started drafting the UN Charter, which was adopted on 25 June 1945 and took effect on 24 October 1945, when the UN began operations.
On World Invocation Day, 1952, Eleanor Roosevelt, a pioneering force in the passage of the Declaration of Human Rights at the United Nations, and wife of President Franklin D. Roosevelt, recorded a brief message which included the Great Invocation. The message was recorded by Mrs Roosevelt at the United Nations.
She served as the first Chairperson of the UN Human Rights Commission and played an instrumental role in drafting the Universal Declaration of Human Rights. At a time of increasing East-West tensions, Mrs. Roosevelt used her enormous prestige and credibility with both superpowers to steer the drafting process towards its successful completion.
The Lucis Trust
The Lucis Trust is the Publishing House which prints and disseminates United Nations material. It is a devastating indictment of the New Age and Pagan nature of the UN. Lucis Trust was established in 1922 as “Lucifer Trust” by Alice Bailey as the publishing company to disseminate the books of Bailey and Blavatsky and the Theosophical Society. The title page of Alice Bailey’s book, ‘Initiation, Human and Solar’, originally printed in 1922, clearly shows the publishing house as ‘Lucifer Publishing Co.’
In 1923, Bailey changed the name to Lucis Trust, because “Lucifer Trust” revealed the true nature of the New Age Movement too clearly.
At one time, the Lucis Trust office in New York was located at 666.
The roots of the Lucis Trust trace back to a woman who is credited for inspiring Hitler’s obsession with the occult, the 19th century author and occultist Helena Blavatsky, who discipled the founder of Lucis Trust, Alice Bailey.
Blavatsky was a Russian immigrant and self-proclaimed levitating psychic who founded the occult and spiritualist movement called Theosophy, which became wildly popular, especially in Germany.
She is “widely considered the ‘mother’ of New Age spirituality as well as a touchstone in the development of Nazi paganism and the chief popularizer of the swastika as a mystical symbol,” writes Jonah Goldberg in National Review.
“The occult revival in Germany and in Europe in general in the late nineteenth and early twentieth centuries led to a remarkable growth of Theosophic lodges as well as other occult groups,” writes author C. M. Vasey. He says there are “parallels between Blavatsky’s esoteric thought and Hitler’s racial ideology” and connects Blavatsky’s “Cyclopean eye” and the elevation of the Aryan race with themes later championed by Hitler.
Blavatsky hardly concealed the true focus of her occult leanings. In an 1885 book, she writes: “It is but natural . . . to view Satan, the Serpent of Genesis, as the real creator and benefactor, the Father of Spiritual mankind.” In another passage she extols “the ‘Harbinger of Light,’ bright radiant Lucifer,” who opened the eyes of Adam.
Blavatsky’s Theosophy magazine was edited by Alice Bailey, who called the levitating admirer of Satan her mentor, dedicating one of her books to Blavatsky, “that great disciple who lighted her torch in the east and brought the light to Europe and America.” Bailey went on to found the Lucis Trust, and Blavatsky’s works are promoted on the Lucis Trust website.
The Lucis website displays a great many of Bailey’s beliefs. But certain Alice Bailey teachings listed on the site a few years ago are now discoverable only on the Wayback Machine.
In 2000, the site noted that Bailey and her followers at Lucis look for the “return of the ‘World Teacher,’ the Coming One Who will return to lead humanity into a new age and into a heightened consciousness.” In 1998, the website had another interesting sentence: “Today the reappearance of the World Teacher, the Christ, is expected by millions, not only by those of Christian faith but by those of every faith who expect the Avatar under other names — the Lord Maitreya, Krishna, Messiah, Imam Mahdi and the Bodhisattva.”
Lucifer Publishing Company 1920
Founded as the Lucifer Publishing Company in the early 1920s, the name was changed in 1925 to the Lucis Publishing Company. In Latin, lucem ferre translates to “light-bearer” and lucis means “of light.” The Lucis Trust has always maintained the same name. It has headquarters in New York City, London, and Geneva.
The objectives of the Lucis Trust as stated in its charter are: “To encourage the study of comparative religion, philosophy, science and art; to encourage every line of thought tending to the broadening of human sympathies and interests, and the expansion of ethical religious and educational literature; to assist or to engage in activities for the relief of suffering and for human betterment; and, in general, to further worthy efforts for humanitarian and educational ends.”
The World Goodwill group, founded in 1932, has been recognized by the United Nations as a Non-Governmental Organization (NGO), and is represented during regular briefing sessions for NGOs at the United Nations. The Lucis Trust has consultative status (roster level) with the United Nations Economic and Social Council.
The Trust is established in Great Britain under the title “Lucis Trust Ltd.”, in Switzerland as “Lucis Trust Association”, and in Holland as the “Lucis Trust Stichting.”
THE OCCULTIC UNITED NATIONS, LUCIS TRUST, AND THE FLAT EARTH CONNECTION
It may surprise you that the United Nations was founded by the occult and has a flat earth map on their official logo. What might surprise you even more is that there is a Satanic library inside the United Nations building called Lucis Trust, formerly known as Lucifer Trust. It was founded by Alice Bailey who was a new age occultist.
“Lucis Trust is the Publishing House which prints and disseminates United Nations material. It is a devastating indictment of the New Age and Pagan nature of the UN. Lucis Trust was established in 1922 as Lucifer Trust by Alice Bailey as the publishing company to disseminate the books of Bailey and Madame Blavatsky and the Theosophical Society. Due to public outrage over the creepy name of the publishing company, it was changed one year later to Lucis Trust.”(1)
“The title page of Alice Bailey’s book, Initiation, Human and Solar was originally printed in 1922, and clearly shows the publishing house as: Lucifer Publishing Co. In 1923, Bailey changed the name to Lucis Trust, because Lucifer Trust revealed the true nature of the New Age Movement too clearly.”
Nuclear testing at Bikini Atoll in the Marshall Islands consisted of the detonation of 23 nuclear weapons by the United States between 1946 and 1958.
The official birthday of the US Air Force is 18 September 1947. On that date, the Army Air Forces became the United States Air Force, a separate and equal element of the United States Armed Forces.
The CIA was also formed on September 18, 1947.
Between 1949 and 1963, the Soviets pounded an 18,500-square-kilometre patch of land known as the Polygon with more than 110 above-ground nuclear tests. Kazakh health authorities estimate that up to 1.5 million people were exposed to fallout in the process. Underground tests continued until 1989.
NATO (North Atlantic Treaty Organization)
NATO was established on 4 April 1949 via the signing of the North Atlantic Treaty (Washington Treaty). The 12 founding members of the Alliance were: Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States.
The Korean War was fought between North Korea and South Korea from 1950 to 1953. The war began on 25 June 1950 when North Korea invaded South Korea following clashes along the border and rebellions in South Korea. North Korea was supported by China and the Soviet Union while South Korea was supported by the United States and allied countries. The fighting ended with an armistice on 27 July 1953.
The Korean War is often called “The Forgotten War” due to its marginalization in the historical record. However, the war would have a dramatic effect on the United States and its foreign policy in future decades.
The Korean War began on June 25, 1950, when some 75,000 soldiers from the North Korean People’s Army poured across the 38th parallel, the boundary between the Soviet-backed Democratic People’s Republic of Korea to the north and the pro-Western Republic of Korea to the south. This invasion was the first military action of the Cold War. By July, American troops had entered the war on South Korea’s behalf.
As far as American officials were concerned, it was a war against the forces of international communism itself. After some early back-and-forth across the 38th parallel, the fighting stalled and casualties mounted with nothing to show for them. Meanwhile, American officials worked anxiously to fashion some sort of armistice with the North Koreans. The alternative, they feared, would be a wider war with Russia and China–or even, as some warned, World War III. Finally, in July 1953, the Korean War came to an end.
In all, some 5 million soldiers and civilians lost their lives in what many in the U.S. refer to as “the Forgotten War” for the lack of attention it received compared to more well-known conflicts like World War I and II and the Vietnam War. The Korean peninsula remains divided today.
View of the Rocky Flats plant looking west in 1995. After 38 years, weapons production ceased in 1989. In 1992, the plant mission changed from weapons production to environmental cleanup and restoration. By 1995, the site had begun to be dismantled.
The construction of the Rocky Flats Plant started in 1951 to the northwest of Arvada, Colorado.
Cloaked in secrecy, and then in controversy, the Rocky Flats plant west of Denver manufactured plutonium triggers during the Cold War. Later, when investigations and an FBI raid in 1989 showed the extent of nuclear contamination, the facility was shuttered and became a Superfund site.
The Shippingport Atomic Power Plant, built in 1954, signified Eisenhower’s follow-through on his promise to alter the use of atomic technology for peaceful rather than military purposes. This plant was the first large-scale commercial nuclear power plant in the United States, built just 40 miles from Pittsburgh in Beaver County.
KAMCHATKA PENINSULA, RUSSIA, 1952
This was the first earthquake in history with a recorded magnitude of 9 on the Richter scale. It occurred in a coastal region and so triggered a tsunami that reached a height of about 14 meters. Apart from causing damage locally, it reached the coast of California, though the damage across the Pacific was limited. Even before this massive earthquake hit, this region of Russia had a history of tectonic activity and many active volcanoes, making it vulnerable to natural disasters.
- The Vietnam War began November 1, 1955
- Ended April 30, 1975
United States involvement in the Vietnam War began shortly after the end of World War II in Asia, first in an extremely limited capacity and escalating over a period of 20 years. The U.S. military presence peaked in April 1969, with 543,000 American combat troops stationed in Vietnam.
By the conclusion of the United States’ involvement in 1973, over 3.1 million Americans had been stationed in Vietnam.
The Great Leap Forward was a five-year plan of forced agricultural collectivization and rural industrialization instituted by the Chinese Communist Party in 1958, which resulted in a sharp contraction of the Chinese economy and between 30 and 45 million deaths by starvation, execution, torture, forced labor, and suicide out of desperation.
The National Aeronautics and Space Administration (NASA) is an independent agency of the U.S. federal government responsible for the civil space program, aeronautics research, and space research. (nasa.gov)
Formed July 29, 1958
Originally known as the Advanced Research Projects Agency (ARPA), the agency was created on February 7, 1958 by President Dwight D. Eisenhower in response to the Soviet launching of Sputnik 1 in 1957
The Defense Advanced Research Projects Agency (DARPA) is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. (darpa.mil)
Formed February 7, 1958 (as ARPA)
This earthquake is the worst in history, with a magnitude of 9.5 on the moment magnitude scale, the highest ever recorded. Around 1,655 people were killed, thousands injured and millions displaced, and it caused property damage worth $550 million. The quake triggered a tsunami that killed 61 people in Hawaii, 138 in Japan and 32 in the Philippines.
The 1960 Valdivia earthquake struck Chile on the afternoon of 22nd May 1960, causing widespread damage, with Valdivia being the most affected city. Also known as the Great Chilean earthquake, it was a 9.5 MW earthquake, making it the most powerful ever recorded to date. The tremor triggered a giant tsunami which caused destruction and deaths as far away as Japan, around a day later. Learn about the cause, effects, damage and death toll of the 1960 earthquake in Chile through these 10 interesting facts.
#1 THE GREAT CHILEAN EARTHQUAKE WAS A MEGATHRUST EARTHQUAKE
Subduction is a process that occurs at convergent boundaries of tectonic plates due to the movement of one plate under another. Regions where it occurs are known as subduction zones. Subduction zones produce the strongest earthquakes on earth as their structure allows more stress to accumulate before the energy is released. Earthquakes occurring at subduction zones are known as megathrust earthquakes and nine of the ten most powerful earthquakes in the twentieth century were megathrust, including the 1960 Chile earthquake.
Diagram explaining Subduction which is responsible for Megathrust earthquakes
#2 IT WAS CAUSED BY THE SUBDUCTION OF THE NAZCA PLATE UNDER THE SOUTH AMERICAN PLATE
The South American Plate is a tectonic plate which includes the continent of South America and a sizable region of the Atlantic Ocean seabed. The Nazca Plate is an oceanic tectonic plate in the eastern Pacific Ocean basin off the west coast of South America. The Chile earthquake of 1960 was caused by the release of mechanical stress due to the subduction of the Nazca Plate under the South American Plate along the Chile-Peru Trench, which has been the cause of many other earthquakes too.
Diagram of Nazca Plate subducting beneath the South American plate
#3 IT WAS PRECEDED BY THE 1960 CONCEPCIÓN EARTHQUAKES
Before the Great Chilean earthquake, three earthquakes struck Chile which were foreshocks to the main event; they are known as the 1960 Concepción earthquakes. The first Concepción earthquake struck at 06:02 on 21st May 1960. It had a magnitude of 8.1, lasted for 35 seconds and destroyed a third of the buildings in the city of Concepción in Chile. The second earthquake happened the following day at 06:32 and had a magnitude of 6.8, while the third had a magnitude of 7.9 and happened at 14:55 on 22nd May, just 15 minutes before the Valdivia earthquake.
Map of Chile
#4 THE EPICENTER OF THE 1960 VALDIVIA EARTHQUAKE WAS NEAR LUMACO
The epicenter of the 1960 Valdivia earthquake lay around 100 miles off the coast of Chile in the Pacific Ocean. It was near the town of Lumaco, which is around 570 km south of the Chilean capital Santiago. The focus of the earthquake was relatively shallow at 33 km; other earthquakes in the region are known to reach depths of 70 km. The focus is the point where an earthquake originates, while the epicenter is the point on Earth’s surface directly above the focus.
Map of the Chile Earthquake of 1960
#5 THE MOST AFFECTED CITY IN THE EARTHQUAKE WAS VALDIVIA
The Great Chilean earthquake occurred at 15:11 on 22nd May 1960. It lasted for 11 to 13 minutes and affected all of Chile between Talca and Chiloé Island, an area of more than 400,000 square kilometers. The towns of Valdivia and Puerto Montt suffered the maximum damage. The electricity and water systems of Valdivia were totally destroyed, and around 20,000 people were left homeless due to the destruction of approximately 40 percent of the houses in the town. The port of Puerto Montt collapsed. The train station and numerous buildings in the town were destroyed.
A Valdivia street after the 1960 earthquake
#6 IT CAUSED TSUNAMIS WHICH WERE PRIMARILY RESPONSIBLE FOR THE DAMAGE
Though the tremor caused significant damage, it was the resulting tsunamis, or high waves, which were responsible for the major destruction. The tsunami affected southern Chile, Hawaii, Japan, the Philippines, eastern New Zealand, southeast Australia and the Aleutian Islands. Waves as high as 10.7 meters (35 ft) were recorded 10,000 kilometers from the epicenter, as far away as Japan and the Philippines. Two million people were left homeless due to the destruction caused by the 1960 Valdivia earthquake and the resultant landslides and tsunamis.
#7 VALDIVIA EARTHQUAKE TSUNAMI HIT JAPAN AROUND A DAY LATER
The Chilean coast was severely battered by the localized tsunamis, which reached a height of up to 25 m (82 ft) and caused numerous fatalities. The main tsunami raced across the Pacific Ocean and when it reached Hawaii, the waves still had a maximum height of 10.6 m (35 ft). They killed 61 people and caused $24 million in damage at Hilo Bay on the main island of Hawaii. Waves as high as 5.5 m (18 ft) struck the Japanese island of Honshu around 22 hours after the earthquake, killing 138 people and destroying 1,600 homes. Another 32 people were dead or missing in the Philippines after the tsunami hit those islands.
A wave of the tsunami of Great Chilean earthquake pours into Onagawa, Japan
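As a rough plausibility check on that roughly one-day transit: in the shallow-water approximation, a tsunami’s speed depends only on ocean depth. Here is a minimal sketch, assuming a mean Pacific depth of about 4,000 m and a Valdivia-to-Honshu distance of roughly 17,000 km (both illustrative values, not given in the text):

```python
import math

# Minimal sketch: in the shallow-water (long-wave) approximation, a tsunami's
# speed depends only on ocean depth: v = sqrt(g * h).
g = 9.81              # gravitational acceleration, m/s^2
depth_m = 4_000       # assumed mean Pacific depth (illustrative)
distance_km = 17_000  # rough Valdivia-to-Honshu distance (illustrative)

speed_ms = math.sqrt(g * depth_m)               # ~198 m/s, about 713 km/h
hours = distance_km * 1_000 / speed_ms / 3_600  # crossing time in hours

print(f"speed ≈ {speed_ms * 3.6:.0f} km/h, Pacific crossing ≈ {hours:.1f} h")
# Prints roughly: speed ≈ 713 km/h, Pacific crossing ≈ 23.8 h,
# in line with the ~22 hours reported for the waves reaching Honshu.
```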
#8 WAVES SET OFF BY IT BOUNCED BACK AND FORTH ACROSS THE PACIFIC FOR A WEEK
The waves set off by the Great Chilean earthquake bounced back and forth across the Pacific Ocean for a week. Aftershocks were recorded for around a month after the main tremor. On 24 May, 38 hours after the 1960 Valdivia earthquake, the Cordón-Caulle volcano in Los Lagos region of Chile erupted after nearly 40 years of inactivity. Some seismologists believe the eruption was linked to the Valdivia earthquake.
Map of the travel time of the 1960 Valdivia tsunami across the Pacific
#9 IT CAUSED 1,000 TO 6,000 DEATHS
The death toll and the monetary losses caused by the earthquake cannot be stated with certainty. The monetary cost of the disaster has been estimated in the range of US$400 million to US$800 million (US$2.9 to 5.8 billion in 2011, after adjusting for inflation). Studies have estimated the total number of fatalities at 1,655, 2,231, 3,000, 5,700 and even 6,000. The death toll of the 1960 Valdivia earthquake was low considering its magnitude. This was because it occurred in the afternoon, many of the structures in the affected areas were earthquake-resistant, and the foreshocks had made the population wary.
Eruption of Cordon Caulle following the 1960 Valdivia earthquake
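As a rough sanity check on the inflation adjustment quoted above, here is a minimal sketch assuming annual-average U.S. CPI values of about 29.6 for 1960 and 224.9 for 2011 (illustrative figures, not from the text):

```python
# Rough check of the inflation adjustment, assuming annual-average U.S. CPI
# values of ~29.6 for 1960 and ~224.9 for 2011 (illustrative figures).
cpi_1960, cpi_2011 = 29.6, 224.9
factor = cpi_2011 / cpi_1960          # ≈ 7.6

low, high = 400e6, 800e6              # estimated 1960 cost range, US$
print(f"${low * factor / 1e9:.1f}B to ${high * factor / 1e9:.1f}B in 2011 dollars")
# ≈ $3.0B to $6.1B, in the same ballpark as the $2.9–5.8 billion quoted.
```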
#10 IT IS THE MOST POWERFUL EARTHQUAKE EVER RECORDED
The moment magnitude scale (MW) is used to measure the size of earthquakes in terms of energy released. It is the most commonly used scale for medium to large earthquake magnitudes. An increase of two points on the scale corresponds to a 1,000-fold increase in energy. Thus an MW 7.0 quake releases 1,000 times the energy of an MW 5.0 quake and about 32 times that of an MW 6.0 quake. The Great Chilean earthquake measured 9.5 MW, making it the largest earthquake of the twentieth century and the most powerful ever recorded. Among recorded earthquakes, it is followed by the 9.2 MW 1964 Alaska earthquake and the 9.1–9.3 MW Indian Ocean earthquake of 2004.
Chart of the most powerful earthquakes recorded in history
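The “1,000 times per two points” rule in fact #10 follows from the standard magnitude-energy relation; a worked version, assuming the usual Gutenberg–Richter constants (not stated in the text):

```latex
% Assumed Gutenberg–Richter energy relation (not stated in the text):
%   \log_{10} E \approx 1.5\,M + 4.8 \quad (E \text{ in joules})
% Hence the energy ratio between two magnitudes:
\[
  \frac{E_2}{E_1} = 10^{\,1.5\,(M_2 - M_1)}
\]
% \Delta M = 2.0:\; 10^{3} = 1000\times      (an MW 7.0 vs an MW 5.0 quake)
% \Delta M = 1.0:\; 10^{1.5} \approx 31.6\times  (the ``about 32 times'' above)
% Chile 9.5 vs Alaska 9.2:\; 10^{1.5 \times 0.3} = 10^{0.45} \approx 2.8\times
```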
THE 2010 CHILE EARTHQUAKE
Chile was struck again by a powerful 8.8 MW earthquake in 2010. It was the fifth largest earthquake ever recorded by a seismograph and ranks thirteenth in the overall list including estimates. The 2010 quake was also a megathrust event, caused by the subduction of the Nazca Plate under the South American Plate. It damaged 370,000 homes in Chile and killed at least 525 people. The cost of the earthquake was estimated at between 4 and 7 billion dollars.
On Friday, November 22, 1963, John F. Kennedy, the 35th president of the United States, was assassinated at 12:30 p.m. CST in Dallas, Texas, while riding in a presidential motorcade through Dealey Plaza.
A few things that need to be addressed about that day in November:
The motorcade took an unexpected route. Why? Did the President know there was a hit on him? What caused him to think that a detour was needed, without telling the Secret Service about it?*
If the driver really did shoot the President, then why? Why there? Why not someplace less public? If the driver did shoot the President, then I would say that “THAT IS THE COVER-UP.” Maybe Kennedy didn’t get shot at all, but a body double instead.**
Who was behind the shooting? If it was a Secret Society, then which one? Who benefitted from the shooting? Why make it so public?***
* If Kennedy felt his life was in danger then it would make sense to make a last minute change of route. Why not tell the Secret Service about the change ahead of time? The best answer to both of these questions is that LBJ was behind the assassination. He not only benefitted from the shooting by succeeding to the presidency, but his “friends” back in Texas benefitted by the change in Oil regulations.
** Why do the shooting in front of the world? Why not hide the assassination behind Kennedy’s Addison’s Disease?
In the end nobody has come up with anyone behind the shooting that makes a lot of sense. Sure, we can all point to some Secret Society, but which one? Isn’t the Kennedy family one of the Power Elites, or were they just an old-school power family that wouldn’t step aside for the new ruling class? Old money vs. new money. Guess who won?
PRINCE WILLIAM SOUND, ALASKA, 1964
Though this earthquake was one of the biggest ever recorded, it caused relatively little loss of life and property since it occurred in the remote region of Alaska. This earthquake too triggered a tsunami, and together they caused the deaths of 128 people and damaged property worth $311 million. It occurred along the boundary of the North American and Pacific Plates, and the tremors caused significant damage 120 kilometers to the northwest, in the town of Anchorage.
The Club of Rome is a nonprofit, informal organization of intellectuals and business leaders whose goal is a critical discussion of pressing global issues. The Club of Rome was founded in 1968 at Accademia dei Lincei in Rome, Italy. It consists of one hundred full members selected from current and former heads of state and government, UN administrators, high-level politicians and government officials, diplomats, scientists, economists, and business leaders from around the globe.
In 1968, the Club of Rome determined the limits to growth; the result of the study was that civilization as we know it would collapse shortly after the year 2000 unless the population was seriously reduced. Several top secret recommendations were made to the ruling elite by Dr. Aurelio Peccei of the Club of Rome. The chief recommendation was to develop a microbe which would attack the autoimmune system and thus render the development of a vaccine impossible. Dr. Peccei had come to the conclusion that a plague similar to the Black Death was needed in order to “rid” certain groups from the population. The orders were given to develop the microbe and to also develop a cure and a prophylactic. The microbe would be used against the general population and would be introduced by vaccine administered by the World Health Organization. The prophylactic was to be used by the ruling elite. The cure would be administered to the survivors when they decided that enough people had died, and it would be announced as “newly developed.”
This plan was called Global 2000. Funding was obtained from the U.S. Congress under H.B. 15090 (1969) and given to the Department of Defense’s 1970 budget to produce “a synthetic biological agent, an agent that does not naturally exist and for which no natural immunity could have been acquired.” Virologists refer to this as “Gain-of-Function” research that alters an organism or disease in a way that increases pathogenesis, transmissibility, or host range (the types of hosts that a microorganism can infect). The bottom line is that these bioweapons would be resistant to the immunological processes that we depend on to maintain immunity; they would be more infectious and lethal. This was the beginning of the “Gain-of-Function” experimentations… and they never stopped!
The project was carried out at Fort Detrick, Maryland. Since large populations were to be decimated, the ruling elite decided to target the “undesirable elements of society” for extermination. Initially they targeted the Black, Hispanic, and homosexual populations and currently they are targeting the rest of society. The name of the project that developed the HIV virus that causes AIDS is MKNAOMI. The African continent was infected with HIV via Smallpox vaccine in 1977. Members of the gay community in the U.S. population were infected with HIV in 1978 with the Hepatitis B vaccine through the Centers for Disease Control and now the rest of mankind is being infected through the COVID-19 Vaccine! Vaccines armed with HIV autoimmune disease causing agents are the NWO’s “modus operandi” for mass depopulation!
In 1962 Roy Ash, Henry Kissinger, and Vice President Nelson Rockefeller, all members of the infamous Club of Rome, were financial investors in a company called Litton Bionetics. They helped to finance and establish the National Cancer Institute (NCI), which was part of the National Institute of Health (NIH) at Fort Detrick which included Litton Bionetics administration at the facility.
By 1968, Dr. Robert Gallo was NCI project officer for Bionetics, where new cancer viruses that were functionally identical to HIV were created. And it was Bionetics that directly supplied Merck Pharmaceuticals with the HIV bioweapon used to formulate both the Smallpox and Hepatitis B vaccine. Although all existing records of Project MKNAOMI were destroyed, it’s clear that the scientists of the illuminati at Fort Detrick declared their own secret war against mankind and made unilateral decisions to eliminate 90 percent of the world’s population without our knowledge or consent. These are your Judges, Jurors and Executioners whose decision was final and without an option to appeal; Narcissistic Satanic “White Coats” catering to their NWO Masters!
In 1991, the Club of Rome, a group of ‘elite’ individuals (think tank for the Vatican?), published a book called ‘The First Global Revolution.’ In that book, they admitted to inventing the climate change agenda as a ‘common enemy’ of mankind, in order to unite the world. Take a look at the following statement from the 1991 book: “In searching for a common enemy against whom we can unite, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like, would fit the bill. In their totality and their interactions these phenomena do constitute a common threat which must be confronted by everyone together.” (The First Global Revolution – A Report by the Council of the Club of Rome, p.75)
As you can see from the above quote, the Club of Rome admitted to ‘coming up with the idea’ that ‘global warming’, (now called ‘climate change’ because facts prove the earth isn’t warming as they said it would) could be used as a ‘common enemy’ to unite the world together. As stated in the book, they clearly had a plan to try and unite the world and a ‘common enemy’ would be needed to fulfill this plan:
“It would seem that men and women need a common motivation, namely a common adversary against whom they can organize themselves and act together.” (The First Global Revolution – A Report by the Council of the Club of Rome, p.70)
The Club of Rome stimulated considerable public attention in 1972 with its first report, The Limits to Growth. Since 1 July 2008, the organization has been based in Winterthur, Switzerland.
Some notable members include Alexander King, Anders Wijkman, Ashok Khosla, Aurelio Peccei, Bas de Leeuw, and Bohdan Hawrylyshyn. The club also has honorary members; notable honorary members include Princess Beatrix of the Netherlands, Orio Giarini, Fernando Henrique Cardoso, Mikhail Gorbachev, King Juan Carlos I of Spain, Horst Köhler, and Manmohan Singh.
The Club of Rome is a platform of diverse thought leaders who identify holistic solutions to complex global issues and promote policy initiatives and action to enable humanity to emerge from multiple planetary emergencies.
The organisation has prioritised five key areas of impact: Emerging New Civilisations; Planetary Emergency; Reframing Economics; Rethinking Finance; and Youth Leadership and Intergenerational Dialogues.
There are a number of ‘common enemies’ today that the Papacy is uniting the world against. The climate agenda is the main one, but other common enemies like ‘extremism’ and ‘terrorism’ are being used to unite the world together, and thus another ‘Tower of Babel’ is being erected to ‘secure’ mankind against these common enemies and unite the world under a ‘common cause.’ But this will end up failing, just as the original tower failed.
Apollo 11 (July 16–24, 1969) was the American spaceflight that first landed humans on the Moon. Commander Neil Armstrong and lunar module pilot Buzz Aldrin landed the Apollo Lunar Module Eagle on July 20, 1969.
Astronauts Neil Armstrong and Buzz Aldrin took the first steps on the moon, or did they?
One of the most notorious conspiracy theories is that the six Apollo moon landings, which took place between 1969 and 1972, were hoaxes, and that the photographs released by NASA documenting them were faked. This idea was first given substantial form by the former US Navy midshipman Bill Kaysing, who self-published a book outlining his theory, We Never Went to the Moon: America’s Thirty Billion Dollar Swindle, in 1976.
Those who believe in the moon-landing hoax argue that NASA’s motivation for hoodwinking the world was to create the impression that the Americans had won the space race against the Russians, when, in fact, they hadn’t.
“Kaysing was a well-educated man, and he believed his theory and followed it through in a very analytical way,” MacDonald says. “When you read his concerns, they seem quite coherent, which is why people believe them.”
Kaysing and his followers, many of whom continue to question the moon landings today, focus on supposed peculiarities of the NASA photographs, which are freely available on the agency’s website.
“It’s the same with UFO conspiracy theories,” MacDonald says. “Often, there is an idea that governments are hiding something: look at Area 51. There’s always an ‘us and them’ angle with conspiracy theories (otherwise who would be conspiring?) – and usually it’s governments who are the ‘them’.”
A Moon landing or lunar landing is the arrival of a spacecraft on the surface of the Moon. This includes both crewed and robotic missions. The first human-made object to touch the Moon was the Soviet …
During the sixties, Arthur got rich marketing the tranquillizers Librium and Valium. One Librium ad depicted a young woman carrying an armload of books, and suggested that even the quotidian anxiety a college freshman feels upon leaving home might be best handled with tranquillizers. Such students “may be afflicted by a sense of lost identity,” the copy read, adding that university life presented “a whole new world . . . of anxiety.” The ad ran in a medical journal. Sackler promoted Valium for such a wide range of uses that, in 1965, a physician writing in the journal Psychosomatics asked, “When do we not use this drug?”
“And what’s fascinating, dating back to the early days, they made their first big fortune marketing Valium in the 1960s. And dating back to the 1950s, 1960s, there was this tendency in this family to do philanthropic giving where you see your name on universities and art museums, but to always obscure the source of the family wealth.”
Arthur Sackler refined the art of wooing physicians with direct appeals, enticing them with lucrative speaker fees, dinners and trips. In exchange, doctors used the medications sold by companies for which he worked. His efforts helped turn the tranquilizer Valium into a bestseller.
According to Keefe, it was a “template” followed and expanded by his heirs after the Sacklers transformed Purdue Pharma into an opioid juggernaut.
Drug that steals women’s lives: It’s more addictive than heroin, with horrifying side-effects. So why, 50 years after its launch, is Valium still given to millions?
How the Sackler family built a pharma dynasty and fueled an American calamity
The Family That Built an Empire of Pain
In his interview with Vanity Fair, David Sackler lamented the day that his 4-year-old son returned from nursery school and asked,
“Why are my friends telling me that our family’s work is killing people?”
Sackler family erased suicide of drug-addled heir, new book reveals
On the morning of July 5, 1975, a deeply troubled Robert Mortimer Sackler somehow made his way from his apartment on East 64th Street to his mother’s home on East 86th Street.
Bobby, as he was known to his family, had just turned 24 years old and was one of the heirs to the Sackler drug empire, a private, family-run business that was then on its way to becoming a multibillion dollar concern with its focus on developing and marketing powerful painkillers. His father, Mortimer, a medical doctor from Brooklyn, had bought the small Manhattan company — known for its laxatives and ear wax-remover — in 1952. Mortimer’s younger brother Raymond and eldest brother Arthur also held an interest in the firm that would become known as the drug giant Purdue Pharma.
Bobby grew up with older sisters Kathe and Ilene at a sprawling home in Great Neck, Long Island, but moved with his mother to the Upper East Side when he was 15 and his parents divorced. But long before the split from his wife, Mortimer was already establishing a pattern as an absentee dad, preferring to spend much of his time poolside in the South of France, playing tennis and sipping cocktails. He had started an affair with a much younger woman while his wife Muriel raised their children on Long Island.
By the time he was in his early 20s, Bobby had already been in and out of psychiatric facilities, and was a full-blown drug addict, using heroin and PCP or “angel dust” on a daily basis, according to “Empire of Pain: The Secret History of the Sackler Dynasty,” by Patrick Radden Keefe.
“He was a little cuckoo,” said Ceferino Perez, a longtime doorman in the white-glove postwar building on 11 East 86th Street where Muriel lived in a two-bedroom on the ninth floor. “He was the kind of guy that nobody was going to hire.”
Unemployed, Bobby lived alone with his pet cats in a one-bedroom on East 64th Street in a luxury building owned by his father. When he was away for frequent stays at rehab facilities, a housekeeper that Muriel employed for three decades took care of his cats, according to Radden Keefe.
“He was crazy,” said a family friend, describing how Bobby had once been found wandering nude in Central Park. “Totally out of control.”
When he arrived in the lobby of his mother’s building on that humid Saturday morning, Bobby fought with the elevator operator, according to Radden Keefe. He barged into his mother’s apartment where he could be heard arguing and demanding money. Moments later, he broke a window and plunged to his death.
Bobby Sackler’s tragic story has been buried for more than 40 years. There are no accounts of his suicide in newspapers and no public photographs of the young heir. His drug-addled life was an inconvenient truth and a huge embarrassment for a family who sold drugs. The Sacklers were forging an empire built on highly-addictive pain killers and wanted to be known for their generous philanthropy to the arts and to universities around the world, writes Radden Keefe, who managed to track down some of the witnesses to the 1975 suicide and tells Bobby’s story for the first time in his book, which will be released April 13.
The Sacklers came under fire two years ago for their role in the opioid crisis, which has killed more than 450,000 people in the US alone. Purdue Pharma began marketing the powerful painkiller OxyContin in 1996, misleading the public about the dangers of the highly addictive narcotic, according to court papers. Last year the company pled guilty to criminal charges related to its marketing of OxyContin and the family agreed to pay $225 million in civil fines. Purdue Pharma faced penalties of $8.3 billion to settle some of the myriad lawsuits against them, although it’s unlikely to pay anywhere near that amount since the company filed for bankruptcy protection.
Since the revelation of the OxyContin scourge, museums around the world, including the Metropolitan Museum in New York, the Louvre in Paris, the Tate Modern and the National Portrait Gallery in London, have distanced themselves from the Sackler dynasty, whose philanthropy and position in high society was being carefully molded in the 1950s and 1960s while Bobby was struggling with his issues.
Yet the three Brooklyn-born brothers who founded the Sackler empire were in a perfect position to help Bobby when he was in the throes of his illness.
From the time they were children, the brothers were encouraged by their father, Isaac Sackler, a Jewish immigrant grocer, to become doctors. Arthur Sackler led the way, graduating from Erasmus Hall High School in Brooklyn and financing his studies at New York University by working for a drug-marketing company that helped launch tranquilizers such as Librium and Valium. Arthur encouraged his younger brothers Mortimer and Raymond to follow in his footsteps and go to medical school, and even brought them into the infamous Creedmoor State Hospital in Queens, a psychiatric facility, where he began a residency in psychiatry in 1944. At the time, the hospital was described as a “six thousand bed jail” where patients were regularly subjected to brutal electroshock treatments and lobotomies.
“Among them, the brothers conducted the [electroshock] procedure thousands of times, an experience they came to find demoralizing,” writes Radden Keefe. The brothers decided to work on alternative methods to help patients, and after experimenting with electroshock therapy on a rabbit, discovered that they could help bipolar and schizophrenic patients by giving them doses of histamine. The drug treatments were so successful that doctors at Creedmoor were able to move away from the more invasive procedures of the past.
Guided by Arthur, who had become wealthy through drug marketing and running scientific journals, the brothers took over tiny Purdue Frederick in the early 1950s. By 1983, the Sacklers moved the company, now named Purdue Pharma and producing an arthritis medication, to Norwalk, Conn. It expanded again and moved its corporate headquarters to Stamford in 2001.
As the three brothers amassed their fortunes, they began devoting time to their philanthropy. Arthur, who had started collecting art while still at NYU, was instrumental in helping the Metropolitan Museum obtain the Temple of Dendur from Egypt in 1967 by offering, along with his brothers, to finance the $3.5 million construction of a special wing of the museum to house the circa-15 B.C. sandstone structures. Arthur had developed a close relationship with Met directors over the years, and even managed to secure a private “enclave” at the museum where he stored some of his vast collection of Chinese antiquities.
Plans for construction of the Sackler Wing at the Met got underway around the same time that Mortimer turned 50 and was launching what he called his “new life,” which coincided with the downward spiral of his first-born son, who lived mostly with his mother in New York City, although he would sometimes travel to France to vacation with his father.
“The Cote D’Azur this year is not as mobbed,” wrote Mortimer to a friend in the summer of 1966. “There has been, as usual, a change in the places that are in and those not in. There has been a new crop of bikini girls, and the leftovers of the last few crops.”
Among the new crop was Gertraud “Geri” Wimmer, a “statuesque” Austrian who was 20 years old, the same age as his eldest daughter Ilene. After his divorce from Muriel, Mortimer married Wimmer in 1969. He renounced his US citizenship for tax reasons, and the couple divided their time among homes in Paris, New York and the Swiss Alps, and a sprawling seaside villa on the Cap d’Antibes. The couple had two children — Samantha and Mortimer David — before divorcing a decade later. In 1980, Mortimer married his third wife, Theresa Rowling, an English Catholic school teacher who was 31 years old. Mortimer was then 64, but went on to have three more children — Marissa, Sophie and Michael — with Theresa. Both Mortimer and Theresa would later be recognized by the Queen for their philanthropy in England.
Mortimer died in 2010, after making billions on OxyContin but well before the onslaught of lawsuits and probes that would leave the Sacklers’ reputation in tatters around the world. In the end, it was his widow Theresa and daughters Kathe and Ilene, along with five other members of the Sackler clan who served on the Purdue Pharma board, who were forced into a reckoning over the company’s decision to market a drug they knew to be highly addictive. Kathe was herself a medical doctor, although she never practiced medicine. According to internal family emails included in court filings, Kathe, now 72, took credit for the family’s decision to introduce OxyContin.
Kathe and Ilene probably understood the dangers of addiction better than any surviving members of the Sackler clan after the terrible death of their brother Bobby.
During a 2019 deposition in a New York City boardroom, Kathe made what might have seemed an offhand remark about the heroin crisis of the 1970s, but she appeared to be recalling that faraway day in July 1975: “I have friends. Relatives I mean. I know people, individual people who have suffered. It touches everyone’s life. It’s horrible.”
Bobby’s death was certainly horrible. He died instantly upon hitting the pavement.
Perez, the doorman, heard the sound of breaking glass.
“Then a much louder, closer sound as something heavy landed on the sidewalk. The impact was so intense that it sounded like a car crash,” writes Radden Keefe. “But when Perez looked over, he saw that there was a body on the sidewalk. It was Bobby Sackler. He had fallen nine stories. His head had cracked open on the pavement.”
A distraught Muriel Sackler called down to the front desk. “My son jumped out the window,” she said. “He broke the window with a chair. Do you think he’s dead?”
A shocked Perez confirmed that Bobby was indeed dead.
Mortimer’s New York family was distraught over Bobby’s death. But their grief seemed to turn quickly to embarrassment.
The tiny funeral announcement in the New York Times on July 9 said only that he had died “suddenly in the 24th year of life.” A service was held at the Riverside Chapel, with donations suggested for a performing arts space on 11th Street in Manhattan.
While Mortimer was said to be broken up about his son’s death, he did almost nothing to preserve his memory. The family eventually established the Robert Sackler scholarship at Tel Aviv University “but there was never any explanation with this endowment of who Robert Sackler had been in life,” writes Radden Keefe. “It was a strange paradox: the Sackler family had put their name everywhere. But when a member of the family died young, they did not commemorate him in any public fashion.”
THE 1970 BHOLA CYCLONE
This tropical cyclone hit what is now Bangladesh (then East Pakistan) on Nov. 12-13, 1970. According to NOAA’s Hurricane Research Division, the storm’s strongest wind speeds measured 130 mph (205 km/h), making it the equivalent of a Category 4 major hurricane on the Saffir-Simpson Hurricane Wind Scale. Ahead of its landfall, a 35-foot (10.6 m) storm surge washed over the low-lying islands bordering the Bay of Bengal, causing widespread flooding.
The storm surge, combined with a lack of evacuation, resulted in a massive death toll estimated at 300,000 to 500,000 people. A 1971 report from the National Hurricane Center and the Pakistan Meteorological Department acknowledged the challenge of accurately estimating the death toll, especially given the influx of seasonal workers who were in the area for the rice harvest.
A Trip to the Moon (or Le voyage dans la lune, originally) isn’t about the moon landing; it came out in 1902, decades before NASA was founded in the late ’50s. But Georges Méliès’s seminal film was a pioneering work of its own. It’s considered one of the earliest science fiction movies, inspired partly by stories from writers like Jules Verne.
The film follows a group of astronomers who travel to the Moon in a cannon-propelled capsule, explore the lunar surface, escape from an underground group of Selenites (lunar inhabitants), and return to Earth with a captive Selenite.
The film was screened at Méliès’s Théâtre Robert-Houdin in Paris from September through December 1902.
It remains one of the most influential films in cinema history.