I have been working on a major timeline to look for major causes of death via "natural" causes like earthquakes, tsunamis, fires, floods, and dams, which are man-made; all natural disasters are actually created by them to harm us.
The dates are in order; I am in the process of editing.
The British tea company's flag looks just like the USA flag – who designs a flag to look exactly like your so-called enemy?
The flags of the Confederate States of America have a history of three successive designs during the American Civil War. The flags were known as the “Stars and Bars”, used from 1861 to 1863; the “Stainless Banner”, used from 1863 to 1865; and the “Blood-Stained Banner”, used in 1865 shortly before the Confederacy‘s dissolution. A rejected national flag design was also used as a battle flag by the Confederate Army and featured in the “Stainless Banner” and “Blood-Stained Banner” designs. Although this design was never a national flag, it is the most commonly recognized symbol of the Confederacy.
Since the end of the Civil War, private and official use of the Confederate flags, particularly the battle flag, has continued amid philosophical, political, cultural, and racial controversy in the United States. These include flags displayed in states; cities, towns and counties; schools, colleges and universities; private organizations and associations; and individuals. The battle flag was also featured in the state flags of Georgia and Mississippi, although it was removed by the former in 2003 and the latter in 2020. After the former was changed in 2001, the city of Trenton, Georgia has used a flag design nearly identical to the previous version with the battle flag.
The first flag resembling the modern stars and stripes was an unofficial flag sometimes called the "Grand Union Flag", or "the Continental Colors." It consisted of 13 red-and-white stripes, with the British Union Jack in the upper left-hand corner. It first appeared on December 3, 1775, when Continental Navy Lieutenant John Paul Jones flew it aboard Captain Esek Hopkins's flagship Alfred in the Delaware River. It remained the national flag until June 14, 1777.[6] At the time of the Declaration of Independence in July 1776, the Continental Congress would not legally adopt flags with "stars, white in a blue field" for another year. The "Grand Union Flag" has historically been referred to as the first national flag of the United States.[7]
The Continental Navy raised the Colors as the ensign of the fledgling nation in the American War for Independence – likely with the expedient of transforming their previous British red ensign by adding white stripes.[7][8] The name "Grand Union" was first applied to the Continental Colors by George Henry Preble in his 1872 book History of the American Flag.[8]
The flag closely resembles the flag of the British East India Company during that era, and Sir Charles Fawcett argued in 1937 that the company flag inspired the design of the U.S. flag. Both flags could have been easily constructed by adding white stripes to a British Red Ensign, one of the three maritime flags used throughout the British Empire at the time. However, an East India Company flag could have from nine to 13 stripes and was not allowed to be flown outside the Indian Ocean.[10] Benjamin Franklin once gave a speech endorsing the adoption of the company’s flag by the United States as their national flag. He said to George Washington, “While the field of your flag must be new in the details of its design, it need not be entirely new in its elements. There is already in use a flag, I refer to the flag of the East India Company.” This was a way of symbolizing American loyalty to the Crown as well as the United States’ aspirations to be self-governing, as was the East India Company. Some colonists also felt that the company could be a powerful ally in the American War of Independence, as they shared similar aims and grievances against the British government’s tax policies. Colonists, therefore, flew the company’s flag to endorse the company.[12]
However, the theory that the Grand Union Flag was a direct descendant of the flag of the East India Company has been criticized as lacking written evidence.[13] On the other hand, the resemblance is obvious, and some of the Founding Fathers of the United States were aware of the East India Company’s activities and of their free administration of India under Company rule.
PAN
In ancient Greek religion and mythology, Pan is the god of the wild, shepherds and flocks, rustic music and impromptus, and companion of the nymphs. He has the hindquarters, legs, and horns of a goat, in the same manner as a faun or satyr.
The word "pan" means "all" in Greek, and Pan became a kind of universal god, a representative of pantheism. Pan was also associated with panic because he had a nasty habit of suddenly appearing out of nowhere, shouting, and frightening people away.
Today the word pan makes us think of the pandemic that suddenly appeared, making many people panic.
The term “pandemic,” from the Greek pandēmos, meaning “all people,” traces its origins along with the word “panic” to the Greek nature god, Pan. This horned and hooved creature was born to Hermes and one of the many nymphs that he loved.
Upon seeing the wild newborn, Pan's mother fled in terror, leaving Hermes to care for his half-human, half-goat son. Hermes introduced the boy to his fellow Olympians, who found the curious child pleasing. Carl Kerényi explains that the common attributes associated with Pan – "dark, terror-awakening, phallic" – do not cover the range of possibilities within this god; in fact, the dark Pan may have had a twin. Pan himself may have been just one side of a more complex divine male couple.
Pull up a Chair;)
Nobel’s Path to Dynamite and Wealth
One of Nobel’s tutors was the accomplished Russian organic chemist Nikolai Zinin, who first told him about nitroglycerine, the explosive chemical in dynamite.
Though Nobel was interested in poetry and literature, his father wanted him to become an engineer, and in 1850, he sent him to Paris to study chemical engineering. Though he never obtained a degree or attended the university, Nobel worked in the Royal College of Chemistry laboratory of Professor Jules Pélouze.
It was there that Nobel was introduced to Professor Pélouze’s assistant, Italian chemist Ascanio Sobrero, who had invented nitroglycerin in 1847. Though the explosive power of the chemical was much greater than that of gunpowder, it tended to explode unpredictably when subjected to heat or pressure and could not be handled with any degree of safety. As a result, it was rarely used outside the laboratory.
The Nobel brothers (clockwise): Robert, Alfred (aged 9), Ludvig and baby Emil. St. Petersburg, around 1843. Source: nobelprize.org.
His experiences with Pélouze and Sobrero in Paris inspired Nobel to look for a way to make nitroglycerin a safe and commercially usable explosive.
In 1851, at age 18, Nobel spent a year in the United States studying and working under Swedish-American inventor John Ericsson, designer of the American Civil War ironclad warship USS Monitor.
There is no Nobel Prize for Mathematics because Alfred Nobel, the founder of the Nobel Prizes, did not include it in his will. There are many speculations as to why he did not include mathematics as a category. One of the more credible reasons is that he simply didn’t care much for mathematics and that it was not considered a practical science from which humanity could benefit, which was a chief purpose for creating the Nobel Foundation¹⁴.
However, there are other prestigious awards for mathematicians such as the Fields Medal and the Abel Prize¹.
Alfred Nobel was a Swedish chemist, engineer, inventor, businessman, and philanthropist. He was born on October 21, 1833 in Stockholm, Sweden and died on December 10, 1896 in San Remo, Italy. He was the third son of Immanuel Nobel, an inventor and engineer, and Karolina Andriette Nobel¹.
Nobel is best known for having bequeathed his fortune to establish the Nobel Prize. However, he also made several important contributions to science during his lifetime and held 355 patents. His most famous invention was dynamite, a safer and easier means of harnessing the explosive power of nitroglycerin; it was patented in 1867.
Nobel displayed an early aptitude for science and learning, particularly in chemistry and languages; he became fluent in six languages and filed his first patent at the age of 24. He embarked on many business ventures with his family, most notably owning the company Bofors, an iron and steel producer that he developed into a major manufacturer of cannons and other armaments.
Let’s first mention that the popular myth that Nobel decided not to fund a prize for mathematicians because his wife was cheating on him with a mathematician (often said to be Gösta Mittag-Leffler) is (predictably) not true. In fact, it’s trivially false since Nobel was never married! Furthermore, in the correspondence between him and his lover, there is no sign of anything like an affair.
It appears that the reason there is no Nobel Prize in Mathematics is much more dull: Nobel probably simply wasn’t that interested in pure mathematics. The other categories are more natural, considering the context: Nobel was personally very much involved in physics and chemistry, so prizes for that were beyond questioning.
It was also clear to him that medicine enormously benefits mankind (this is almost the definition of medicine, when you think about it!) It appears that the peace prize was suggested by his secretary and old lover, who would go on to win the prize in 1905. Finally, the prize in literature seems to simply have come from the fact that Nobel was very interested in literature.
There is an alternative theory, however. At the time, the same Mittag-Leffler who is often accused of stealing Nobel's wife had just recently persuaded King Oscar II to create an 'endowment prize for various mathematicians throughout Europe'. Perhaps this convinced Nobel that no additional prize for mathematicians was needed.
Alfred Nobel made his fortune primarily through the invention of dynamite. Upon his death in 1896, he left the bulk of his assets to an endowment to invest in “safe securities”. His will stated that the interest from this endowment should be awarded annually as prizes to those who “conferred the greatest benefit to humankind”. This is how the Nobel Prizes were established.
Nitroglycerin was later adopted as a commercially useful explosive by Alfred Nobel, who experimented with safer ways to handle the dangerous compound after his younger brother, Emil Oskar Nobel, and several factory workers were killed in an explosion at the Nobels' armaments factory in 1864 in Heleneborg, Sweden.
Alfred Nobel‘s patent application from 1864
One year later, Nobel founded Alfred Nobel and Company in Germany and built an isolated factory in the Krümmel hills of Geesthacht near Hamburg.
This business exported a liquid combination of nitroglycerin and gunpowder called “Blasting Oil”, but this was extremely unstable and difficult to handle, as evidenced in numerous catastrophes. The buildings of the Krümmel factory were destroyed twice.
In April 1866, several crates of nitroglycerin were shipped to California, three of which were destined for the Central Pacific Railroad, which planned to experiment with it as a blasting explosive to expedite the construction of the 1,659-foot-long (506 m) Summit Tunnel through the Sierra Nevada Mountains.
One of the remaining crates exploded, destroying a Wells Fargo company office in San Francisco and killing 15 people.
This led to a complete ban on the transportation of liquid nitroglycerin in California. On-site manufacture of nitroglycerin was thus required for the remaining hard-rock drilling and blasting needed to complete the First Transcontinental Railroad in North America.
When the medicine prize honored the man who found a use for DDT, which was later banned
The 1948 medicine prize to Swiss scientist Paul Müller honored a discovery that ended up doing both good and bad.
Müller didn't invent dichlorodiphenyltrichloroethane, or DDT, but he discovered that it was a powerful pesticide that could kill lots of flies, mosquitoes, and beetles in a short time.
The compound proved very effective in protecting agricultural crops and fighting insect-borne diseases like typhus and malaria. DDT saved hundreds of thousands of lives and helped eradicate malaria from southern Europe.
But in the 1960s environmentalists found that DDT was poisoning wildlife and the environment. The US banned DDT in 1972 and in 2001 it was banned by an international treaty, though exemptions are allowed for some countries fighting malaria.
When the man who invented lobotomy won the medicine prize
Carving up people’s brains may have seemed like a good idea at the time. But in hindsight, rewarding Portuguese scientist Antonio Egas Moniz in 1949 for inventing lobotomy to treat mental illness wasn’t the Nobel Prizes’ finest hour.
The method became very popular in the 1940s, and at the award ceremony it was praised as “one of the most important discoveries ever made in psychiatric therapy.”
But it had serious side effects: Some patients died and others were left severely brain damaged. Even operations that were considered successful left patients unresponsive and emotionally numb.
The method declined quickly in the 1950s as drugs to treat mental illness became widespread, and it is very seldom used today.
For more than 100 years, the Nobel Prizes have recognized the finest in human achievements, from literature and science to the Nobel Peace Prize, which is given “to the person who shall have done the most or the best work for fraternity between nations, the abolition or reduction of standing armies and for the holding and promotion of peace congresses,” according to the last will and testament of founder Alfred Nobel.
But the origins of the Nobel Prizes, and the life of Alfred Nobel, tell a very different story, one tainted by the deaths of untold thousands of people.
Alfred Bernhard Nobel was born in 1833 in Stockholm, Sweden. His father, Immanuel Nobel, was an inventor and engineer who struggled financially for much of his life.
Forced to declare bankruptcy, Immanuel left Sweden and began working in St. Petersburg, Russia, where he impressed the czar with one of his inventions: submerged explosive mines that could thwart a naval invasion.
Finally achieving a measure of success, Immanuel brought his wife and eight children to St. Petersburg. His sons were given a formal education, and Alfred shone under strict Russian tutelage, mastering several languages as well as chemistry, physics, poetry and natural sciences.
Because the elder Nobel disapproved of Alfred’s interest in poetry, he sent his son abroad to further his training in chemistry and engineering.
While studying in Paris, Nobel met Italian chemist Ascanio Sobrero, who in 1847 invented nitroglycerin, the oily, liquid explosive made by combining glycerin with nitric acid and sulfuric acid.
Innovation from tragedy
Though nitroglycerine was considered too unsafe to have any practical use, the Nobel family — which now had several profitable enterprises in Russia and Sweden — continued to investigate its potential for commercial and industrial uses.
But their inquiries had tragic results: In 1864, Alfred’s younger brother Emil and several other people were killed in an explosion at one of their factories in Sweden.
The disaster encouraged Alfred to try to find a way to make nitroglycerin safe. Success didn’t come easily: Early experiments included the creation of “blasting oil,” a mixture of nitro and gunpowder, which resulted in several deadly explosions and once killed 15 people when it exploded in a storeroom in San Francisco.
Finally, in 1867, Alfred Nobel found that by mixing nitroglycerin with diatomaceous earth (known as kieselguhr in German), the resulting compound was a stable paste that could be shaped into short sticks that mining companies might use to blast through rock.
Nobel patented this invention as “dynamite,” from the Greek word dunamis, or “power.”
The invention of dynamite revolutionized the mining, construction and demolition industries. Railroad companies could now safely blast through mountains, opening up vast stretches of the Earth’s surface to exploration and commerce.
As a result, Nobel — who eventually garnered 355 patents on his many inventions — grew fantastically wealthy.
Bertha von Suttner, Alfred Nobel's good friend and the 1905 Nobel Peace Prize recipient. Her peace activities undoubtedly influenced Alfred Nobel to establish a prize for peace along with the prizes for science and literature. "Inform me, convince me, and then I will do something great for the movement," Alfred Nobel said to Bertha von Suttner. Image source: nobelprize.org.
‘Merchant of death’
Dynamite, of course, had other uses, and it wasn’t long before military authorities began using it in warfare, including dynamite cannons used during the Spanish-American War. Though he’s widely credited with being a pacifist, it’s not known whether Nobel approved of dynamite’s military use or not. Nonetheless, he found out what others thought of his invention when, in 1888, his brother Ludvig died. Through some journalistic error, Alfred’s obituary was widely printed instead, and he was scorned for being the man who made millions through the deaths of others.
One French newspaper wrote "Le marchand de la mort est mort," or "the merchant of death is dead." The obituary went on to describe Nobel as a man "who became rich by finding ways to kill more people faster than ever before."
Nobel was reportedly stunned by what he read, and as a result became determined to do something to improve his legacy.
Case for rocket powered by ballistite, designed by Alfred Nobel and W. T. Unge, 1896
One year before he died in 1896, Nobel signed his last will and testament, which set aside the majority of his vast estate to establish the five Nobel Prizes, including one awarded for the pursuit of peace.
Timeline
1347
The Black Death: A Timeline of the Gruesome Pandemic
One of the worst plagues in history arrived at Europe’s shores in 1347. Five years later, some 25 to 50 million people were dead.
Nearly 700 years after the Black Death swept through Europe, it still haunts the world as the worst-case scenario for an epidemic. Called the Great Mortality as it caused its devastation, this second great pandemic of Bubonic Plague became known as the Black Death in the late 17th Century.
Modern genetic analysis suggests that the Bubonic plague was caused by the bacterium Yersinia pestis or Y. pestis. Chief among its symptoms are painfully swollen lymph glands that form pus-filled boils called buboes. Sufferers also face fever, chills, headaches, shortness of breath, hemorrhaging, bloody sputum, vomiting and delirium, and if it goes untreated, a survival rate of 50 percent.
During the Black Death, three different forms of the plague manifested across Europe. Below is a timeline of its gruesome assault on humanity.
Black Death Emerges, Spreads via the Black Sea
FRESCO BY AN ANONYMOUS PAINTER DEPICTING ‘THE TRIUMPH OF DEATH.’ DEATH AS A SKELETON RIDES A SKELETAL HORSE AND PICKS OFF HIS VICTIMS.
1346
The strain of Y. pestis emerges in Mongolia, according to John Kelly’s account in The Great Mortality. It is possibly passed to humans by a tarabagan, a type of marmot. The deadliest outbreak is in the Mongol capital of Sarai, which the Mongols carry west to the Black Sea area.
Mongol King Janiberg and his army are in the nearby city of Tana when a brawl erupts between Italian merchants and a group of Muslims. Following the death of one of the Muslims, the Italians flee by sea to the Genoese outpost of Caffa and Janiberg follows on land. Upon arrival at Caffa, Janiberg's army lays siege for a year but is stricken with an outbreak of the plague. As the army catapults the infected bodies of its dead over the city walls, the besieged Genoese become infected as well.
May, 1347
Both sides in the siege are decimated and survivors in Caffa escape by sea, leaving behind streets covered with corpses being fed on by feral animals. One ship arrives in Constantinople, which, once infected, loses as much as 90 percent of its population.
October, 1347
Another Caffan ship docks in Sicily, the crew barely alive. Here the plague kills half the population and moves to Messina. Fleeing residents then spread it to mainland Italy, where one-third of the population is dead by the following summer.
November, 1347
The plague arrives in France, brought by another of the Caffa ships docking in Marseille. It spreads quickly through the country.
A New Strain Enters Europe
THE PLAGUE IN TOURNAI, 1349.
January, 1348
A different plague strain enters Europe through Genoa, brought by another Caffan ship that docks there. The Genoans attack the ship and drive it away, but they are still infected. Italy faces this second strain while already battling the previous one.
Y. pestis also heads east from Sicily into the Persian Empire and through Greece, Bulgaria, Romania and Poland, and south to Egypt, as well as Cyprus, which is also hit with destruction from an earthquake and a deadly tidal wave at the same time.
Venice faces its own outbreak and pioneers the first organized response, with committees ordering ship inspections and burning those with contagions, shutting down taverns, and restricting wine from unknown sources. The canals fill with gondolas carrying officials shouting instructions for disposing of dead bodies. Despite those efforts, the plague kills 60 percent of the Venetian population.
April, 1348
The plague awakens an anti-Semitic rage around Europe, causing repeated massacres of Jewish communities, with the first one taking place in Provence, where 40 Jews were murdered.
June, 1348
The plague enters England through the port of Melcombe Regis, in Dorset. As it spreads through the town, some escape by fleeing inland, inadvertently spreading it further.
Though it had been around for ages, leprosy grew into a pandemic in Europe in the Middle Ages. A slow-developing bacterial disease that causes sores and deformities, leprosy was believed to be a punishment from God that ran in families.
The Black Death haunts the world as the worst-case scenario for the speed of a disease's spread. It was the second pandemic caused by the bubonic plague, and it ravaged Earth's population.
In another devastating appearance, the bubonic plague led to the deaths of 20 percent of London's population. The worst of the outbreak tapered off in the fall of 1666, around the same time as another destructive event—the Great Fire of London.
The first of seven cholera pandemics over the next 150 years, this wave of the small intestine infection originated in Russia, where one million people died. Spreading through feces-infected water and food, the bacterium was passed along to British soldiers who brought it to India, where millions more died.
The first significant flu pandemic started in Siberia and Kazakhstan, traveled to Moscow, and made its way into Finland and then Poland, where it moved into the rest of Europe. By the end of 1890, 360,000 had died.
The avian-borne flu that resulted in 50 million deaths worldwide, the 1918 flu was first observed in Europe, the United States and parts of Asia before spreading around the world. At the time, there were no effective drugs or vaccines to treat this killer flu strain.
Starting in Hong Kong and spreading throughout China and then into the United States, the Asian flu became widespread in England where, over six months, 14,000 people died. A second wave followed in early 1958, causing about 1.1 million deaths globally, with 116,000 deaths in the United States alone.
First identified in 1981, AIDS destroys a person's immune system, resulting in eventual death by diseases that the body would usually fight off. AIDS was first observed in American gay communities but is believed to have developed from a chimpanzee virus from West Africa in the 1920s. Treatments have been developed to slow the progress of the disease, but 35 million people have died of AIDS since its discovery.
First identified in 2003, Severe Acute Respiratory Syndrome is believed to have started with bats, spread to cats and then to humans in China, followed by 26 other countries, infecting 8,096 people, with 774 deaths.
Summer, 1348
A group of religious zealots known as the Flagellants first begin to appear in Germany. These groups of anywhere from 50 to 500 hooded and half-naked men march, sing and thrash themselves with lashes until swollen and bloody. Originally the practice of 11th-century Italian monks during an epidemic, they spread out through Europe. Also known for their violent anti-Semitism, the Flagellants mysteriously disappear by 1350.
The plague hits Marseille, Paris and Normandy, and then the strain splits, with one strain moving on to the now-Belgian city of Tournai to the east and the other passing through Calais and Avignon, where 50 percent of the population dies.
The plague also moves through Austria and Switzerland, where a fury of anti-Semitic massacres follows it along the Rhine after a rumor spreads that Jews had caused the plague by poisoning wells, as Jennifer Wright details in her book Get Well Soon: History's Worst Plagues and the Heroes Who Fought Them. In towns throughout Germany and France, Jewish communities are completely annihilated. In response, King Casimir III of Poland offers a safe haven to the persecuted Jews, starting a mass migration to Poland and Lithuania. Marseille is also considered a safe haven for Jews.
Black Death Reaches London, Scotland and Beyond
FLAGELLANTS, KNOWN AS THE BROTHERS OF THE CROSS, SCOURGING THEMSELVES AS THEY WALK THROUGH THE STREETS IN ORDER TO FREE THE WORLD FROM THE BLACK DEATH, IN THE BELGIAN TOWN OF TOURNAI
October, 1348
Following the infection and death of King Edward III’s daughter Princess Joan, the plague reaches London, according to King Death: The Black Death and its Aftermath in Late-Medieval England by Colin Platt. As the devastation grows, Londoners flee to the countryside to find food. Edward blames the plague on garbage and human excrement piled up in London streets and in the Thames River.
February, 1349
One of the worst massacres of Jews during the Black Death takes place on Valentine’s Day in Strasbourg, with 2,000 Jewish people burned alive. In the spring, 3,000 Jews defend themselves in Mainz against Christians but are overcome and slaughtered.
April, 1349
The plague hits Wales, brought by people fleeing from Southern England, and eventually kills 100,000 people there.
Vikings, Crippled by Plague, Halt Exploration
July, 1349
An English ship brings the Black Death to Norway when it runs aground in Bergen. The ship's crew is dead by the end of the week, and the pestilence travels to Denmark and Sweden, where the king believes fasting on a Friday and forgoing shoes on Sunday will please God and end the plague. It doesn't work: the plague kills two of the king's brothers and moves into Russia and also eastern Greenland.
March, 1350
Scotland, having so far avoided the plague, hopes to take advantage of English weakness by amassing an army and planning an invasion. While waiting on the border to begin the attack, the troops become infected, with 5,000 dying. Choosing to retreat, the soldiers bring the disease back to their families, and a third of Scotland perishes.
Black Death Fades, Leaving Half of Europe Dead
1351
The plague's spread begins to peter out significantly, possibly thanks to quarantine efforts, after causing the deaths of anywhere from 25 to 50 million people and leading to the massacres of 210 Jewish communities. In total, Europe has lost about 50 percent of its population.
1353
With the Black Death considered safely behind them, the people of Europe face a changed society. The combination of the massive death rate and the numbers of survivors fleeing their homes sends entrenched social and economic systems spiraling. It becomes easier to get work for better wages and the average standard of living rises.
With the feudal system dying, the aristocracy tries to pass laws preventing any further rise by the peasants, leading to upheaval and revolution in England and France. Significant losses within older intellectual communities bring an unprecedented opportunity for new ideas and art concepts to take hold, directly leading to the Renaissance and a more youthful, enlightened period of human history.
The bubonic plague never completely disappears, resurfacing several times through the centuries.
1665
THE GREAT PLAGUE OF LONDON, 1665
The Great Plague of London in 1665 was the last in a long series of plague epidemics that first began in London in June 1499. The Great Plague killed between 75,000 and 100,000 of London’s rapidly expanding population of about 460,000.
The Great Plague of London, lasting from 1665 to 1666, was the last major epidemic of the bubonic plague to occur in England.
First suspected in late 1664, London’s plague began to spread in earnest eastwards in April 1665 from the destitute suburb of St. Giles through rat-infested alleys to the crowded and squalid parishes of Whitechapel and Stepney on its way to the walled City of London.
THE GREAT PLAGUE AT ITS PEAK
By September 1665, the death rate had reached 8,000 per week. Helpless municipal authorities threw their earlier caution to the wind and abandoned quarantine measures. Houses containing the dead and dying were no longer locked. London’s mournful silence was broken by the noise of carts carrying the dead for burial in parish churches or communal plague pits such as Finsbury Field in Cripplegate and the open fields in Southwark.
Well-off residents soon fled to the countryside, leaving the poor behind in impoverished and decrepit parishes. Tens of thousands of dogs and cats were killed to eliminate a feared source of contagion, and mounds of rotting garbage were burned. Purveyors of innumerable remedies proliferated, and physicians and surgeons lanced buboes and bled black spots in attempts to cure plague victims by releasing bad bodily humors.
Plague Orders, first issued by the Privy Council in 1578, were still effective in 1665. These edicts prohibited churches from keeping dead bodies on their premises during public assemblies or services, and carriers of the dead had to identify themselves and could not mix with the public.
Between 1665 and 1666, London saw one of its worst outbreaks of the plague since 1348. The government eventually introduced public health measures that restricted people's movement in a way that would not be seen in London again until the COVID-19 outbreak in 2020. Now, just a couple of years after the outbreak of COVID, and while much of the world is still feeling its lasting effects, it is especially interesting to consider how people in the past dealt with very similar public health crises.
Death Travels up the River: The Great Plague Arrives in London
Two women lying dead in a London street during the great plague, 1665, one with a child who is still alive. Etching after R. Pollard II, via the Wellcome Collection, London
London was well acquainted with the plague by the seventeenth century, as there had been several outbreaks in the city since 1348. Despite this, the Great Plague of London in 1665-1666 was perhaps the worst. Estimates report that about 15% of London’s population was lost, and while official records report 68,596 deaths, it is more accurate to assume that the number was probably over 100,000.
The disease first arrived in 1665 in St Giles-in-the-Fields, a parish just outside the city walls. It then spread into the heart of the city, and by September, it was reported that 7,165 people in London had died in just one week. Carried through the overcrowded city to its seventeenth-century inhabitants by rats infected with the bacterium Yersinia pestis, the disease did not appear to discriminate in the victims it chose.
In the end, the only thing that stopped the plague in 1666 was the Great Fire of London, which ripped through the city and destroyed a lot of the infrastructure as well as the infected rats and fleas. Prior to this, the local government had attempted to put in place some public health measures in order to prevent the spread of the disease.
Seventeenth-Century Lockdown
A street during the plague in London with a death cart and mourners, color wood engraving by E. Evans, via the Wellcome Collection, London
One of the two main ways the government attempted to control the spread of the disease was by “shutting up” houses. This early concept of quarantine was born in 14th century Venice, where ships were held at ports for 40 days after their arrival to ensure that they did not bring disease into the city. The word quarantine comes from the Italian quaranta giorni, which translates to “forty days.”
The concept has evolved over time, and it is now defined in the Cambridge dictionary as “a specific period of time in which a person or animal that has a disease, or may have one, must stay or be kept away from others in order to prevent the spread of the disease.”
The concept was loosely employed in seventeenth-century England. In 1630, the Privy Council in London ordered that any houses infected with plague be “shut up.” The process began when someone passed away. Government-appointed “searchers” would be sent out in order to ascertain how the individual had died. If it were understood to be the plague, the house would be “shut up.”
Understandably, the thought of being locked in their own homes and left either to die of the plague or to catch it from a fellow family member did not appeal to most. It was therefore common for individuals who knew they had the plague to disguise their malady before the searchers were sent. Those who were wealthy enough sometimes even resorted to bribery to avoid being locked in their house to eventually die. Because these searchers were often older, poorer women, they were extremely likely to take these bribes.
Two men discovering a dead woman in the street during the great plague of London, wood engraving by J. Jellicoe after H. Railton, via the Wellcome Collection, London
To ensure their rules were adhered to, guards were placed outside the doors of such houses so that no one could leave. The local constable padlocked the doors to the homes; they were then marked with a red cross with the words "Lord have mercy upon us" written alongside it. This was done to prevent people from entering the home and to warn others that those inside were infected.
The law stated that this quarantine should last 20 days; however, this period was extended if one of the individuals inside passed away. During this period, the houses marked with these crosses were looked upon with immense fear. There were few offers of help from the outside, and Samuel Pepys, a London resident at the time, reports that “…a gentleman walking by called to us to tell us that the house was shut up of the sickness. So we with great affright turned back, being holden to the gentlemen; and went away.”
There were also reports of escapes. Naturally, healthy people did not like the idea of being locked within an infected house for 20 days, where they would more than likely catch the disease themselves. Eventually, these orders developed into sending infected people to Pest Houses.
More Extreme Measures: The Pest Houses
Pest house (isolation hospital in times of plague), Tothill Fields, Westminster, London, c. 1840, via the Wellcome Collection, London
Alongside at-home quarantine, the Privy Council employed another method to control the plague’s spread: Pest Houses. The Earl of Craven stated that shutting up families within their homes was inhumane and ineffective. He argued for the use of Pest Houses, which were effectively isolation hospitals where sick people, or those who had been in contact with the disease, could be taken until they recovered.
If the searchers sent out to look for people who had the disease but had not identified themselves discovered someone with the plague, they could send the suffering individual to the local Pest House rather than back to their own house to isolate.
It was up to the families whether they would move with an infected relative to the Pest House or stay at their home and quarantine. If the entire family went to the Pest House, then the infected home would be quarantined. The door would be marked with a red cross; however, no inscription would be made, in order to show that the house was empty. Again, guards would be stationed outside the house to ensure that no one entered or looted it.
Records that survive from the period and depict the construction of Pest Houses show that they were made up of two buildings: one for the infected and one for the healthy but exposed. They were both designed in the same fashion: tall stone walls and big windows. The big windows were to ensure airflow and to release any miasmas (bad smells) from the buildings as it was believed they were what caused disease.
These establishments were ruled over by a master or mistress, who, in turn, employed nurses and watchmen. The gates around the property were locked to prevent people from escaping.
Masks
A physician wearing a 17th-century plague preventive, via the Wellcome Collection, London
The use of masks was also employed during the outbreak of the plague, but not in the way we may assume. Normal individuals were not wearing masks; doctors were. The plague mask has become a distinct visual of early modern medicine, but why was it worn?
Christian J. Mussap has credited the introduction of this mask, and the entire outfit, to the French doctor Charles de Lorme. De Lorme described the beaked mask as:
“… half a foot long, shaped like a beak, filled with perfume with only two holes, one on each side near the nostrils, but that can suffice to breathe and carry along with the air one breathes the impression of the [herbs] enclosed further along in the beak.”
Doctors donned this mask filled with herbs because of the belief in miasmas, or bad smells. The dominant medical theory at the time held that disease was spread through miasmas. Thus, by filling their masks with pleasant-smelling herbs, doctors believed the disease could not be passed on to them while working with patients.
The most common substance used in the mask was theriac, a mixture of over 55 herbs and other substances like honey or cinnamon. Unfortunately for those who wore these masks, these herbs were ineffective against plague since it was actually caused by bacteria.
According to Garrett Ray's Encyclopedia of Infectious Diseases, the first mention of the iconic plague doctor is found during the 1619 plague outbreak in Paris, in the written work of royal physician Charles de Lorme, who was serving King Louis XIII of France at the time.
Outbreak
The Great Plague of 1665 was the last major plague in England. Before the Great Plague, England had had outbreaks of plague (meaning many people got the disease) every few decades. For example:
- The 1603 plague killed 30,000 Londoners;
- The 1625 plague killed about 35,000 people; and
- The 1636 plague killed about 10,000 people.
Another Disaster Brings the End
Detail of London Scenes of the Plague, 1665-1666, via National Archives UK
Fortunately (or unfortunately) for the inhabitants of seventeenth-century London, another one of the city's worst disasters took place in 1666. The Great Fire of London ripped through a large section of the city, thus killing off a lot of the infection. The buildings in London were made of timber with thatched roofs and built extremely close together, meaning they caught fire at an alarming rate.
The situation was only made worse by the fact that London had no organized fire brigade at this time. Attempts were made to control the fire; however, little could be done to prevent the blaze from spreading.
Arguments have been made for and against the idea that the fire halted the spread of the plague. Some, like Meriel Jeater, have argued that the plague was actually declining prior to the outbreak of the fire. Jeater contends that the fire couldn’t possibly have ended the plague because the fire only spread through about a quarter of London. Furthermore, the areas most affected by the plague, Southwark, Clerkenwell, and Whitechapel, were not touched by the fire.
Survey of the ruins caused by the Great Fire of London, 1667, via the British Library, London
There are various ways the plague could have decreased on its own. For example, because there would be a rat epidemic prior to a human epidemic, it could have reached a point where there were simply no rats left to act as reservoirs for the disease. This, combined with the fact that much of the human population had either died or fled and the fact that the colder months would have made it harder for fleas to survive, meant that there is a possibility the disease struggled to keep infecting people at the rate it once had.
Whatever caused the decline of the plague after 1666, there is no doubt that while it terrorized London in full force, it was a great source of fear and unease for many. Not only was the disease associated with fears of death and suffering, but also with being separated from one's family members.
As one of the most famous fires in history, the Great Fire of 1666 swept through the capital, leaving a trail of devastation and desperate Londoners behind it.
The Great Fire Of London 1666
On Sunday, September 2, 1666, a tiny spark in a bakery oven ignited the worst fire that London has ever seen. Some of the poorer houses had walls covered with tar, which made them catch fire all the more easily. Indeed, when the Lord Mayor of London, Sir Thomas Bloodworth, was woken up to be told about the fire, he replied, "Pish!"
The Great Fire of London was a disaster waiting to happen. The fire gutted the medieval City of London inside the old Roman city wall. It threatened but did not reach the City of Westminster (today's West End), Charles II's Palace of Whitehall, and most of the suburban slums.
The Great Fire of London of 1666 was the third occasion on which St Paul's Cathedral was seriously damaged by fire in its 600-year history.
Great Fire of London, (September 2–5, 1666), the worst fire in London’s history. It destroyed a large part of the City of London, including most of the civic buildings, old St. Paul’s Cathedral, 87 parish churches, and about 13,000 houses.
On Sunday, September 2, 1666, the fire began accidentally in the house of the king’s baker in Pudding Lane near London Bridge. A violent east wind encouraged the flames, which raged during the whole of Monday and part of Tuesday. On Wednesday the fire slackened; on Thursday it was extinguished, but on the evening of that day the flames again burst forth at The Temple. Some houses were at once blown up by gunpowder, and thus the fire was finally mastered. Many interesting details of the fire are given in Samuel Pepys’s Diary. The river swarmed with vessels filled with persons carrying away as many of their goods as they were able to save. Some fled to the hills of Hampstead and Highgate, but Moorfields was the chief refuge of the houseless Londoners.
Within a few days of the fire, three different plans were presented to the king for the rebuilding of the city, by Christopher Wren, John Evelyn, and Robert Hooke; but none of these plans to regularize the streets was adopted, and in consequence the old lines were in almost every case retained. Nevertheless, Wren’s great work was the erection of St. Paul’s Cathedral and the many churches ranged around it as satellites. Hooke’s task was the humbler one of arranging as city surveyor for the building of the houses.
1707
The history of the United Kingdom began in the early eighteenth century with the Treaty of Union and Acts of Union. The core of the United Kingdom as a unified state came into being in 1707 with the political union of the kingdoms of England and Scotland,[1] into a new unitary state called Great Britain.[a] Of this new state of Great Britain, the historian Simon Schama said:
What began as a hostile merger would end in a full partnership in the most powerful going concern in the world… it was one of the most astonishing transformations in European history.[2]
1559: Queen Elizabeth I crowned. She was a Protestant queen who ruled for 44 years. It was a time of great wealth for the country, although many thousands were made homeless because of changes in land use.
1588: The Armada. A group of ships from Spain tried to invade England. They were defeated.
1592: Scottish parliament becomes Presbyterian. This is a type of Protestant Christianity, influenced by the teachings of John Calvin.
1603: The start of the Stuart dynasty. King James VI of Scotland was a close relation of the English Queen Elizabeth I. He was crowned as James I of England after her death because she had no children. It brought the two nations together (uneasily).
1642: The Civil War started. King Charles I was not a good leader and wanted money for a war with Scotland. Parliament did not want to help him. People who supported the king (Cavaliers) fought people who supported Parliament (Roundheads). About 10% of the population died in the fighting.
1649: Britain became a republic (called ‘the Commonwealth’). King Charles I had his head cut off. A military leader called Oliver Cromwell took control. He became a dictator.
1660: The Restoration of the Monarchy. Cromwell died in 1658 and his son Richard took over. He was not a good leader. Charles I’s son was invited back to the country to be King Charles II.
1665: The Great Plague of London. About 20% of London’s population died of bubonic plague.
1666: The Great Fire of London. A fire that started in a bakery destroyed 80% of the city.
1689: The Glorious Revolution. King James II (King Charles II's brother) was unpopular – and Catholic. He fled abroad after William of Orange (the husband of his Protestant daughter Mary) came with an army. Mary and William became joint monarchs, known as William III and Mary II.
1692: The Glencoe Massacre. Catholics in Scotland were told to swear their support of the new king William III (a Protestant) by January 1, 1692. The chief of the MacDonald clan did it too late. As punishment, 34 men, 2 women and 2 children were killed by soldiers of the Earl of Argyll on the orders of the king.
1707: Great Britain is created. The Treaty of Union between Scotland and England created the united Kingdom of Great Britain, with a British parliament in Westminster.
1714: The start of the Georgian era. Queen Anne died and her nearest Protestant relative became the new king, George I. He was from Germany. This was the start of a time of great wealth and colonial expansion.
1715: First Jacobite Rebellion. Catholics who wanted James II of England back on the throne (called Jacobites) fought Protestants who supported the new king George I. The fighting ended when the grandson of James II (known as ‘Bonnie Prince Charlie’) lost the Battle of Culloden in 1746.
1720: South Sea Bubble. Thousands of people went bankrupt and many took their own lives when the price of shares in the South Sea Company collapsed.
1780s: The Highland Clearances. Over 100 years, people in Highland Scotland were forced from their villages and farms so the land could be used for sheep. Thousands of people emigrated, many to Ireland or North America.
1798: The Irish Rebellion. Irish people fought against British rule, with support from the French. Nearly 30,000 people died. Eventually, the British won.
1801: The UK is created. Because of the Irish rebellion, Britain dissolved the Irish parliament and moved its responsibilities to the British parliament. This created the United Kingdom of Great Britain and Ireland.
1825: The first passenger railway is built. It goes between Stockton and Darlington. Soon there were railways nearly everywhere. Many were shut in the 1960s.
1834: Abolition of slavery. Slavery becomes illegal across most of the British Empire after a new law is passed. There was a transitional period that lasted until 1838. Some areas had to wait until 1843: St Helena, Ceylon (now Sri Lanka) and places in India controlled by the East India Company. A new system of 'indentured labourers' was introduced to replace slavery; for many people it was not much better.
1837: The start of the Victorian era. During the reign of Queen Victoria, the British Empire grew until it had a population of over 400 million people. It included countries like India, Australia and much of Africa. Most of these countries are now independent.
1845-50: Irish Potato Famine. Over 1 million people died and about 1 million emigrated when a disease destroyed potatoes, the only food of the poor. During this time, many other foods were grown and sent to Britain. This made Ireland even more determined to become independent.
1851: The Great Exhibition. This trade fair in London showed 100,000 of the most amazing objects from the British Empire. It was held in a very big glass building called the ‘Crystal Palace’ and was visited by 6 million people, including Queen Victoria.
1901: The start of the Edwardian era. After Queen Victoria’s death, her son became King Edward VII. He died in 1910, but the ‘Edwardian era’ is often considered to last until 1914. Britain changed a lot after World War 1, so the Edwardian era marks the last days of the British Empire and the social system of large country houses and servants.
1903: The Suffragettes. For 11 years, women from the Women's Social and Political Union (called 'Suffragettes') fought for women to get the vote. After World War I, women over 30 who owned property were allowed to vote. In 1928, everyone over 21 was allowed to vote.
1914–18: World War 1. The war brought social change because women had to do the jobs of the men while they were fighting. Men from many other countries also helped Britain as part of the Allied Powers.
1921: The Catholic southern part of Ireland declared independence from Britain. It became a republic in 1949. Six mainly Protestant counties in the north stayed with Britain and became Northern Ireland (sometimes called ‘Ulster’). Protestants were usually of English or Scottish descent, while Catholics were usually of Irish descent. The impact is still felt today.
1939–45: World War 2. Famous moments included evacuating British soldiers from Dunkirk in France (1940), the Battle of Britain (German air attacks stopped by British pilots, 1940), the Blitz (bombing raids on British cities, 1940-41), and D-Day/Normandy Landings (when the US, Canada and UK invaded German-occupied France, 1944).
1948: The Windrush generation. People from the West Indies were invited to help Britain rebuild after the war or work in the NHS. Over the next decades, workers were invited from many other countries (including India, Pakistan and Bangladesh).
1951: Festival of Britain. An exhibition in London that celebrated British industry, art and science.
1966: Conflict in Northern Ireland. Over 30 years of violence and bombing (known as ‘the Troubles’) start because of tension between Unionists (mostly Protestant, who want Northern Ireland to stay with Britain) and Nationalists/Republicans (mostly Catholic, who want Northern Ireland as part of the Republic of Ireland). A peace deal was signed in 1998, which gave Northern Ireland its own locally-elected government.
1966: England wins the football World Cup. They won 4-2 against West Germany.
1972: Bloody Sunday. British troops kill 14 civil rights protestors in Derry, Northern Ireland.
1973: The Three-Day Week. Strikes by coal miners meant there was not enough fuel for power stations. For two months, companies could only use electricity three days a week.
1973: Britain joined the EEC. It was an early version of the European Union.
1978/79: Winter of Discontent. Over 4 million people went on strike, including gravediggers, hospital staff, lorry drivers and rubbish collectors.
1981: Brixton Riots. There were riots in London and some other cities in response to racism by police.
1994: The Channel Tunnel opens. It links the UK to France by road and rail.
1997: Death of Princess Diana. The Princess was much loved by the public, so her death at such a young age upset many people.
2005: Civil partnerships became legal. Same-sex couples gained the same rights as married couples.
2012: Queen’s Diamond Jubilee. There were celebrations because Queen Elizabeth had been queen for 60 years. London also hosted the Olympic Games.
2016: Brexit vote. 52% of the UK voted to leave the European Union (though in London, Scotland and Northern Ireland most people wanted to stay).
2020: Britain left the European Union.
2022: Queen’s Platinum Jubilee and Death of Queen Elizabeth II. National celebrations took place in June to recognise Queen Elizabeth II’s 70 years on the throne. Sadly, she died a few months later, in September. Her son became King Charles III.
”Behind it all lies the City of London, anxious to preserve its access to the world’s dirty money. The City of London is a money-laundering filter that lets the City get involved in dirty business while providing it with enough distance to maintain plausible deniability . . . a crypto-feudal oligarchy which, of itself, is. . . captured by the international offshore banking industry. It is a gangster regime, cloaked with the “respectability” of the trappings of the British establishment. . . . guaranteed protection. No matter just how nakedly lawless their own conduct.”
“The City is often now described as the largest tax haven in the world, and it acts as the largest center of the global tax avoidance system. An estimated 50% of the world’s trade passes through tax havens, and the City acts as a huge funnel for much of this money.”
The dragons of the City of London
Since the early 17th century a pair of dragons has supported the crest of the City of London in its coat of arms; and in the latter part of the 19th century ornamental boundary markers were erected at points of entry into the City, each surmounted by a dragon clutching the heraldic shield.
The dragons’ introduction seems to have derived from the legend of St George – whose cross has been a City emblem since at least the early 14th century – and may have been specifically linked to a popular misconception that a fan-like object bearing the cross on an earlier crest was a dragon’s wing.
The City dragon is often incorrectly called a griffin, or gryphon, even in some official literature. It is not clear how the confusion arose but the misnomer has become so entrenched that some authorities consider it to have earned a degree of legitimacy. This especially applies to the statue at Temple Bar, where Westminster’s Strand becomes the City’s Fleet Street. The term ‘east of the Griffin’ was once commonly employed to mean ‘east of Temple Bar’, i.e. in the City of London:
“If something unexpected did not happen, it meant another visit to a little office he knew too well in the City, the master of which, more than civil if you met him on a racecourse, … was quite a different person and much less easy to deal with east of the Griffin.”
Where do the City of London dragons come from?
The origin of the dragons isn’t clear. Some say that they come from the story of George and the Dragon, as St George is the patron saint of England. The sword and the dragons certainly distinguish the coat of arms of the City of London from that of England. In England we associate St George with slaying the dragon, and yet dragons guard the City of London and mark out the old gates around the city (Aldersgate, Bishopsgate, Temple Bar, Bridge Gate and Moorgate). Dragons are often used in stories as guardians (Smaug in J.R.R. Tolkien’s 1937 novel The Hobbit guards Erebor, if only for himself), so are the City of London dragons there to protect the city?
The City of London is the original heart of London, having been established by the Romans around AD 47, shortly after the invasion of AD 43. The City of London is surrounded by dragons. The photo above shows one of the two original dragon statues from the Coal Exchange. These two dragons stood above the entrance to the Coal Exchange in Lower Thames Street until the building was demolished in 1962; they then took up residence on either side of the Victoria Embankment.
Why do dragons guard the City of London?
The ancient City of London is protected by dragons that guard the main roads into the city from perfidious invaders. To help manage the trade in coal, in 1847 a grand building was constructed near the Tower of London, and high above the main entrance, two plinths held two large cast-iron dragons.
Where can I see dragons in London?
These original dragons can be found today at the Victoria Embankment. Half size replicas can be found at High Holborn, Farringdon Street, Aldersgate, Moorgate, Bishopsgate, Aldgate, London Bridge and Blackfriars Bridge. Other buildings in the City are home to the dragons as well.
What are the boundaries of the City of London?
Marked with cast iron dragons in the street, the boundaries of the City of London stretch north from Temple and the Tower of London on the River Thames to Chancery Lane in the west and Liverpool Street in the east.
How many dragon statues are in London?
There are thirteen Dragons around the City of London. Half-size replicas of the original pair of dragons made by Birmingham Guild Limited were erected at main entrances to the City of London in the late 1960s.
Who controls the City of London?
The corporation is headed by the Lord Mayor of the City of London.
What is the earliest image of the crest of the City of London?
The oldest known image of the crest dates from 1539, when it appears on the reverse of the common seal of the City. That image is not very clear, but by the end of the 17th century the crest had developed into the dragon’s wing.
Description and blazon
The Corporation of the City of London has a full achievement of armorial bearings consisting of a shield on which the arms are displayed, a crest displayed on a helmet above the shield, supporters on either side and a motto displayed on a scroll beneath the arms.
The blazon of the arms is as follows:
Arms: Argent a cross gules, in the first quarter a sword in pale point upwards of the last.
Crest: On a wreath argent and gules a dragon’s sinister wing argent charged on the underside with a cross throughout gules.
Supporters: On either side a dragon argent charged on the undersides of the wings with a cross throughout gules.
The Latin motto of the City is Domine dirige nos, which translates as “Lord, direct (guide) us”. It appears to have been adopted in the 17th century, as the earliest record of it is in 1633.
A banner of the arms (the design on the shield) is flown as a flag of the City.
The dragon boundary marks are cast iron statues of dragons (sometimes mistaken for griffins) on metal or stone plinths that mark the boundaries of the City of London.
In fact, the City of London is so independent that it has its own flag, crest, police force, ceremonial armed forces, and a mayor with a special title: the Right Honourable the Lord Mayor of London. Oddly enough, if the monarch wants to enter the City of London, she first must ask the Lord Mayor for permission.
Explained: The secret City of London which is not part of London
The famed English author, poet, and literary critic Samuel Johnson once said that “when a man is tired of London, he is tired of life; for there is in London all that life can afford.” Say this in 2017 and it still holds true, as London continues being one of the most visited cities in the world.
Known for its amazing bridges, modern buildings, and beautiful historic landmarks such as the Tower of London, Westminster Abbey, Houses of Parliament, and Buckingham Palace, London feels like the center of the world and, according to some, it is the world’s financial capital.
While most people will tell you that London is one great big city, the truth is that there is another London inside of London.
The City of London, situated within the city called London, is actually the original London. However, to make things clearer, we need to go back nearly 2,000 years in history, to when the Romans invaded Britain and founded the settlement of Londinium.
When the Romans arrived in Britain in 43 AD, there was no permanent settlement on the site of the City of London but it didn’t take long before Londinium was established. Its access to the River Thames transformed the new settlement into an important trading center. The town began growing rapidly.
About two centuries after its establishment, Londinium was already a large Roman city, with a population of over 10,000 people. It was one of the most important trade centers in the Roman Empire, and the Romans took good care of it, constructing forts for protection, including the gigantic London Wall, parts of which can still be seen today.
A model of London in 85 to 90 AD on display in the Museum of London, depicting the first bridge over the Thames. Author: Steven G. Johnson. CC BY-SA 3.0
The London Wall defined the shape and size of the city. But what is more interesting, the City of London remained within the wall over the course of more than 18 centuries and didn’t extend beyond.
Life in Londinium continued after the Romans left and even though the city experienced some hard times and fell into a decline, its location proved to be so good, it was brought back to its former glory. Trade thrived again and the city grew both economically and in population.
The prosperous trading center didn’t go unnoticed by William the Conqueror, who decided not to attack it and instead came in a friendly fashion to London, offering its citizens some privileges and recognizing their liberties, but in return, they were asked to recognize him as the new King.
The citizens of London did recognize William the Conqueror as the new King and in turn kept their authority and liberties. Throughout the years, many monarchs rose and fell, but the City of London and the liberties of its citizens remained intact.
A surviving fragment of the London Wall behind Tower Hill Station (2005). Author: John Winfield. CC BY-SA 2.0
Although some monarchs saw the city as a threat, thinking that it was too independent, powerful, and rich, none of them made an attempt to subordinate the City of London to their rule. The area was not dependent on another power and had the sovereignty to govern, tax, and judge itself.
Westminster was built nearby with the purpose of competing with the powerful City of London and this was when the second London was born. The new city expanded rapidly and it eventually surrounded the City of London.
In 1889 the County of London was formed, and the name “London” came to be used more often for the larger area surrounding the original City of London.
Top 10 Facts About The City Of London
London covers 600 square miles and has a population of 8.6 million, but only its oldest part, just one square mile in size, is called the City of London. That is where the Romans founded the city of Londinium shortly after they arrived in AD 43. Today, the City of London – or simply “the City” – is the centre of London’s finance industry. The City is where you find the Bank of England, the London Stock Exchange, the investment banks, insurance companies and financial markets. It combines the most modern headquarters buildings with Roman remains and medieval churches, and it is great to explore on foot. Here are 10 facts about the City:
- Geography plays a key role in the success of the City of London. Unlike New York, Tokyo or Hong Kong, the City’s business day overlaps those of all the world’s financial centres. The City can trade with the Eastern Hemisphere in the morning and the Western Hemisphere in the afternoon, allowing its dealers to trade in all major markets in one day.
- Though it is the oldest part of London, the City doesn’t look very old. That is because it has been almost entirely destroyed and rebuilt twice: once in the Great Fire of London in 1666, and then again after being bombed in the second World War. It is fascinating to track down the structures which survived the Great Fire and World War 2.
The boundaries of the City of London are marked with dragons. Photo Credit: ©Ursula Petula Barzey.
- Everywhere you look there are new buildings being constructed in the City of London, many by the world’s leading architects. About a quarter of the buildings are replaced every 25 years.
- Before 1980, any bank operating in the City of London had to have an office within 10 minutes walk of the Bank of England. This was because in the event of a crisis, the Governor of the Bank of England wanted to have the Chief Executive of every bank in the City in his office within 30 minutes.
A view of the City of London from the River Thames. In the forefront is 20 Fenchurch Street, known as the Walkie Talkie Building. In the background is The Leadenhall Building, known as “The Cheesegrater” because of its distinctive wedge shape. Photo Credit: ©Nigel Rundstrom.
- The Bank of England was devised by a Scot, William Paterson, and its first Governor, Sir John Houblon, came from a French Huguenot family. It is the central bank for the whole of the United Kingdom, and yet it is called the Bank of England.
- There are over 500 banks resident in the City of London, most of them foreign. The City has more Japanese banks than Tokyo and more American banks than Manhattan.
In the City of London new skyscraper buildings spring up next to old landmarks. Here, 30 St Mary Axe, known as “The Gherkin”, has been built next to a mediaeval church on the street of St Mary Axe. Photo Credit: ©Nigel Rundstrom.
- The City of London is the centre of global foreign exchange dealing. Over 40% of all the world’s foreign exchange transactions are made in the City – a total of $2.7 trillion per day!
- Some of the City’s most famous institutions started out in coffee houses at the end of the 17th century. Jonathan’s and Garraway’s coffee houses in Exchange Alley saw the first buying and selling of company stocks. Edward Lloyd’s coffee house was where ships and their cargoes could be insured, leading to the foundation of Lloyds of London.
- City life used to be dominated by the Livery Companies, trade guilds who trained craftsmen, set standards and controlled the practice of trades. Over one hundred still exist today even though the trades they represented have vanished from the City of London. Many still occupy grand “livery halls” and survive as social and charitable institutions.
- There are more international telephone calls made from the City of London than anywhere else in the world. This shows the truly global nature of the businesses which operate there. The City of London contains a mix of modern and historical buildings, traditions and stories. A Blue Badge Tourist Guide can show you the hidden parts of the City of London along with the main sights. To make sure you see the full range of London sights, a tour of the City of London is a must.
The City Of London (Aka The Crown) Is Controlling The World’s Money Supply
The ‘Crown’ is not owned by Westminster, the Queen, or England.
The City of London has been granted various special privileges since the Norman Conquest, such as the right to run its own affairs, partly due to the power of its financial capital. These are also mentioned by the Statute of William and Mary in 1690.
The City State of London is the world’s financial power centre and the wealthiest square mile on earth — it contains the Rothschild-controlled Bank of England, Lloyd’s of London, the London Stock Exchange, ALL British banks, and branch offices of 385 foreign banks and 70 U.S. banks.
It has its own courts, laws, flag and police force — not part of greater London, or England, or the British Commonwealth and PAYS ZERO TAXES!
City State of London houses Fleet Street’s newspaper and publishing monopolies (BBC/Reuters), also HQ for World Wide English Freemasonry and for worldwide money cartel known as The Crown…
For centuries the Bank of England has been the centre of the world’s fraudulent money system, with its ‘debt-based’ fiat currency.
The Rothschild banking cartel has maintained tight-fisted control of the global money system through:
- The Bank for Intl. Settlements (BIS),
- Intl. Monetary Fund (IMF) and
- World Bank — the central banks of each nation (Federal Reserve in their American colony), and satellite banks in the Caribbean.
They determine with the stroke of a pen the value of ALL currency on earth. It is their control of the money supply which allows them to control world affairs — from financing both sides of every conflict, through interlocking directorates in weapons-manufacturing companies, to executing global depopulation schemes, crusades and genocide, and controlling the food supply, medicine and ALL basic human necessities.
They have maintained their invisibility through control of the so-called “free press” and wall themselves off with accusations of anti-Semitism whenever the spotlight is shone upon them.
The Crown is NOT the Royal Family or British Monarch.
The Crown is the private corporate City State of London — its Council of 12 members (a Board of Directors) rules the corporation under a mayor, called the LORD MAYOR, with legal representation provided by S.J. Berwin.
City of London map
They are known as “The Crown.” The City and its rulers, The Crown, are not subject to Parliament. They are a Sovereign State within a State. The City is the financial hub of the world.
It is here that the Rothschilds have their base of operations and their centrality of control:
- The Central Bank of England (controlled by the Rothschilds) is located in The City
- All major British banks have their main offices in The City
- 385 foreign banks are located in The City
- 70 banks from the United States are located in The City
- The London Stock Exchange is located in The City
- Lloyd’s of London is located in The City
- The Baltic Exchange (shipping contracts) is located in The City
- Fleet Street (newspapers & publishing) is located in The City
- The London Metal Exchange is located in The City
- The London Commodity Exchange (trading rubber, wool, sugar, coffee) is located in The City
Every year a Lord Mayor is elected as monarch of The City.
The British Parliament does not make a move without consulting the Lord Mayor of The City. For here in the heart of London are grouped together Britain’s financial institutions dominated by the Rothschild-controlled Central Bank of England.
The Rothschilds have traditionally chosen the Lord Mayor since 1820. Who is the present day Lord Mayor of The City? Only the Rothschilds’ know for sure…
How the City of London Came Into Power Inside England
MAYER AMSCHEL BAUER opened a money lending business on Judenstrasse (Jew Street) in Frankfurt Germany in 1750 and changed his name to Rothschild.
Mayer Rothschild had five sons.
The smartest of his sons, Nathan, was sent to London to establish a bank in 1806. Much of the initial funding for the new bank was tapped from the British East India Company, of which Mayer Rothschild had significant control. Mayer Rothschild placed his other four sons in Frankfurt, Paris, Naples, and Vienna.
In 1815, Nathan Rothschild saw an opportunity in the Battle of Waterloo. Early in the battle, Napoleon appeared to be winning and the first military report to London communicated that fact. But the tide turned in favor of Wellington.
A courier of Nathan Rothschild brought the news to him in London on June 20. This was 24 hours before Wellington’s courier arrived in London with the news of Wellington’s victory. Seeing this fortuitous event, Nathan Rothschild began spreading the rumor that Britain was defeated.
With everyone believing that Wellington was defeated, Nathan Rothschild began to sell all of his stock on the English Stock Market. Everyone panicked and also began selling causing stocks to plummet to practically nothing.
At the last minute, Nathan Rothschild began buying up the stocks at rock-bottom prices.
This gave the Rothschild family complete control of the British economy — now the financial centre of the world — and forced England to set up a revamped Bank of England with Nathan Rothschild in control.
Names included in the ruling ‘Committee of 300’ for The ‘Crown’, the London-based corporation, include:
- Rockefeller
- Gore
- Greenspan
- Kissinger
- Krugman (NYTimes)
- Powell
- Gates
- Buffett
- Bush, etc.
Why are these ‘Americans’ on a foreign committee… because the Crown STILL owns the UNITED STATES CORPORATION, a private corporation!
The Lord Mayor and the 12-member council serve as proxies/representatives who sit in for 13 of the world’s wealthiest, most powerful banking families (syndicates) headed by the Rothschild Dynasty. They include:
- Warburgs
- Oppenheimers
- Schiffs
These families and their descendants run the Crown Corporation of London.
The Rockefeller Syndicate runs the American colony through interlocking directorships in JP Morgan Chase, Bank of America and Brown Brothers Harriman (BBH) of New York, along with their oil oligarchy Exxon-Mobil (formerly the multi-headed colossus Standard Oil).
They also manage Rothschild oil asset British Petroleum (BP). The Crown Corporation holds title to world-wide Crown land in Crown colonies like Canada, Australia, New Zealand and many Caribbean Islands.
The British Parliament and the British PM serve as a public front for the hidden power of these ruling Crown families.
“Today the path to total dictatorship in the U.S. can be laid by strictly legal means… We have a well-organized political-action group in this country, determined to destroy our Constitution and establish a one-party state…
“It operates secretly, silently, continuously to transform our Government… This ruthless power-seeking elite is a disease of our century… This group… is answerable neither to the President, the Congress, nor the courts. It is practically irremovable.” — Senator William Jenner, 1954 speech
Broken US-Indigenous treaties: A timeline
As long as the United States has negotiated treaties with Indigenous nations, it has broken those treaties. There is a popular tendency to think of these treaties as inanimate artifacts of the distant past.
This belief, however, is a symptom of the historical amnesia that continues to relegate present-day Indigenous rights issues to the margins. Treaties are, in fact, living documents, which even today legally bind the United States to the promises it made to Native peoples centuries ago. Treaties also acknowledge the inherent sovereignty of Indigenous nations, a fact that has been disputed and undermined in U.S. courts and Congress since 1831, when the Supreme Court ruled that tribes were “domestic dependent nations” without self-determination.
Of the nearly 370 treaties negotiated between the U.S. and tribal leaders, Stacker has compiled a list of 15 broken treaties negotiated between 1777 and 1868 using news, archival documents, and Indigenous and governmental historical reports.
Treaty With the Delawares/Treaty of Fort Pitt (1778)
The 1778 Treaty with the Delawares was the first treaty negotiated between the newly formed United States and an Indigenous nation. The Lenape (Delaware) were already being forced from their ancestral homelands in New York City, the lower Hudson Valley, and much of New Jersey when the Dutch settled there in the 17th century. The treaty stipulated peace between the Lenape and the U.S. as well as mutual support against the British. However, this supposed peace did not last long: In 1782, Pennsylvania militiamen murdered almost 100 Lenape citizens at Gnadenhutten, forcing the Lenape out toward Ohio.
Treaty of Fort Stanwix (1784)
Weakened by the constant encroachment of white settlers after the Revolutionary War, the Iroquois Confederacy was forced to cede part of New York and a large portion of present-day Pennsylvania in the Treaty of Fort Stanwix. In return, the U.S. promised to protect tribal lands from further settlement by white colonists. In the following years, the U.S. did not enforce the treaty terms, and the lands inhabited by the Iroquois Confederacy continued to shrink.
Treaty of Hopewell (1785-86)
The Treaty of Hopewell includes three treaties signed by the U.S. and the Cherokee, Choctaw, and Chickasaw Nations at General Andrew Pickens’ plantation following the Revolutionary War. The treaties supposedly offered the three tribes the protection and friendship of the U.S. and promised no future settlement on tribal lands. Despite these terms, the encroachment of white settlers onto treaty territory was already underway, and future treaties would shrink Cherokee, Choctaw, and Chickasaw lands even further.
Treaty of Canandaigua/Pickering Treaty (1794)
In 1794, the U.S. government and the Haudenosaunee Confederacy, or Six Nations (comprising the Mohawk, Cayuga, Onondaga, Oneida, Seneca, and Tuscarora Nations of New York), signed the Treaty of Canandaigua. In exchange for the Confederacy’s allyship after the Revolutionary War, the U.S. returned over a million acres of Iroquois land that had been previously ceded in the Fort Stanwix Treaty. The Canandaigua Treaty also recognized the sovereignty of the Six Nations to govern themselves and set their own laws.
Despite this apparent act of friendship, the land returned to the Six Nations was lost to U.S. expansion, and the tribes were forced to relocate. While the Onondaga, Seneca, Tuscarora, and Oneida stayed on reservations in New York, the Mohawk and Cayuga moved into Canada.
Treaty of Greenville (1795)
An increasing number of white settlers moved into the Great Lakes region in the 1780s, escalating tension with established Indigenous nations. The Shawnee, Delaware, Miami, Ottawa, Ojibwe, and Potawatomi Nations banded together as the Northwestern Confederacy and assembled an armed resistance to prevent further colonization.
In 1794, a large contingent of the U.S. military, led by General “Mad” Anthony Wayne, was tasked with putting an end to the Northwestern Confederacy’s resistance. The Confederacy was defeated in the Battle of Fallen Timbers and forced to sue for peace. The Treaty of Greenville saw the tribes of the Northwestern Confederacy cede large tracts of land in present-day Michigan, Ohio, Indiana, Wisconsin, and Illinois. The treaty was soon broken, however, by white settlers who continued to expand their reach into treaty lands.
Treaty with the Sioux (1805)
In 1805, General Zebulon Pike mounted an expedition up the Mississippi River without informing the U.S. government. Pike met with a group of Dakota leaders, who allegedly ceded 100,000 acres of land to build a fort and promote U.S. trade in exchange for an unspecified amount of money. Of the seven Dakota leaders, only two signed the treaty. Though Pike valued the purchase at $200,000 in his journal, he left only $200 worth of gifts upon signing. The president never proclaimed the treaty, a necessary step that makes treaties official, and the U.S. adjusted the purchase price to $2,000.
Treaty of Fort Wayne (1809)
In the Treaty of Fort Wayne, the Potawatomi, Delaware, Miami, and Eel River tribes ceded 2.5 million acres of their lands in present-day Michigan, Indiana, Illinois, and Ohio for roughly 2 cents an acre, under pressure from William Henry Harrison, the then-governor of Indiana. Not long after, Harrison led an attack on a camp of followers of Tenskwatawa, the Shawnee Prophet, and Tecumseh, who resisted the encroachment of white settlers on the Ohio Valley Nations. The violence spurred by this attack persisted into the War of 1812.
Indian Removal Act (1830)
Though not technically a treaty, the Indian Removal Act of 1830 functioned as a displacement mechanism and was largely responsible for the treaties created over the following decades. President Andrew Jackson had long been a violent proponent of the forced relocation of Indigenous tribes from the southeast to western areas, leading military efforts against the Creek Nation in 1814 and negotiating many treaties which dispossessed tribes of their lands.
The Indian Removal Act created a process by which the president could exchange tribal lands in the eastern United States for federally designated land west of the Mississippi River by negotiating removal treaties with Indigenous nations. While the act was framed as a peaceful and voluntary process, tribes that did not “cooperate” were made to comply through military force, cheated or tricked out of their land, or subjected to the violence of local white settlers.
Treaty of New Echota (1835)
Following the passage of the Indian Removal Act, facing tremendous pressure to move west, a small group of Cherokees not authorized to act on behalf of the Cherokee people negotiated the Treaty of New Echota. The treaty gave up all Cherokee lands east of the Mississippi River in exchange for $5 million and new territory in Oklahoma. Even though most Cherokee people considered the agreement fraudulent, and the Cherokee National Council formally rejected it in 1836, Congress ratified the treaty.
Two years later, the Treaty of New Echota was used to justify the forced removal of the Cherokee people. In 1838, roughly 16,000 Cherokees were rounded up by the U.S. military and forced to march roughly 1,000 miles to their new lands (the network of removal routes recognized today totals some 5,043 miles). Over 4,000 Cherokee people died on the Trail of Tears.
Treaty with the Potawatomi (1836)
In 1832, the Potawatomi Nation signed a peace treaty with the U.S. ensuring the Potawatomi people’s safety on their reservations in Indiana. Still, it wasn’t long before the U.S. broke this treaty. Further negotiations followed, but in 1836, the Potawatomi were forced to sell their land for around $14,000 and move westward. Though many Potawatomi tried to stay, in 1838, the U.S. government enforced their removal by way of a 660-mile forced march from Indiana to Kansas. Of the 859 Potawatomi people who began what would later be known as the Trail of Death, 40 died, many of whom were children.
Fort Laramie Treaty (1851)
The 1851 Fort Laramie Treaty “defined” the territory of the Great Sioux Nation (Dakotas, Lakotas, and Nakotas) in North and South Dakota, Nebraska, Wyoming, and Montana, in exchange for the creation of roads and railways and the promise of the U.S. to protect the Sioux from American citizens. Nevertheless, settlers and the U.S. military violated the treaty and invaded Lakota lands. Disputes over the treaty’s integrity persist, as evidenced by the building of the Dakota Access Pipeline, which was constructed on treaty lands near the Standing Rock Sioux Reservation. In 2016, water protectors and activists established a camp at Standing Rock to prevent the pipeline’s construction, where they were subjected to attack dogs and other methods of excessive force by law enforcement. The pipeline is still operational.
Treaties of Traverse des Sioux and Mendota (1851)
Under threat of military violence from the increasing numbers of white settler-colonists moving into Minnesota, the Dakota were forced to cede millions of acres of land in the Treaties of Traverse des Sioux and Mendota in exchange for reservations and $1,665,000—the equivalent of about 7.5 cents per acre. However, the Dakota never received either provision. The representatives from the U.S. government who negotiated the treaties tricked the Dakota representatives into signing a third document, which reallocated the funds meant for the Dakota to traders to fulfill invented “debts.” The U.S. Senate further violated the treaties by eliminating the provision for reservations.
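As a rough check of the figures quoted above (a back-of-the-envelope calculation using only the numbers stated here, not an official survey figure), a payment of $1,665,000 at roughly 7.5 cents per acre corresponds to a cession on the order of 22 million acres:

$$\frac{\$1{,}665{,}000}{\$0.075\ \text{per acre}} \approx 22{,}200{,}000\ \text{acres}$$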
Land Cession Treaty with the Ojibwe/Treaty of Washington (1855)
In the 1855 Treaty of Washington, the Ojibwe ceded nearly all of their remaining land not already lost to the U.S. during previous treaties. This new treaty also created the Leech Lake and Mille Lacs Reservations and allotted reservation land to individual families. In doing so, the U.S. attempted to subvert the Ojibwe’s traditional relationship with the land by instating a system of private property, as well as forcing the Ojibwe people to become farmers, a departure from their historical lifestyle of hunting, fishing, and gathering. However, it was mutually agreed that the Ojibwe would be able to continue hunting and fishing on ceded territory.
Unfortunately, in the decades following the signing of the treaty, the state of Minnesota outlawed hunting and harvesting without a license on off-reservation land, a direct violation of the treaty. Despite the Supreme Court’s reaffirmation of the Ojibwe’s hunting and gathering rights on ancestral lands in 1999, conflicts over the use of these lands, including for pipeline development, are ongoing.
Medicine Lodge Treaty (1867)
Two years after the culmination of the Civil War, violence against Plains tribes instigated by westward-moving white settlers came to a head. More than 5,000 representatives of the Kiowa, Comanche, Arapaho, Kiowa-Apache, and Southern Cheyenne nations met with U.S. government delegates to ostensibly negotiate peace. Ultimately, the treaty relocated the Comanches and Kiowas onto one reservation and the Cheyennes and Arapahoes onto another. Even though the participating tribes never approved the treaty, Congress ratified it in 1868 and then quickly began violating the terms, withholding payments, preventing hunting, and cutting down the size of reservations.
In 1903, Kiowa chief Lone Wolf sued the U.S. for defrauding the tribes who participated in the Medicine Lodge Treaty. In a devastating ruling that would have grave consequences for Indigenous land rights, the Supreme Court ruled that Congress could legally “abrogate the provisions of an Indian treaty.” In other words, any treaty made between the U.S. and Native American tribes could be broken by Congress, rendering treaties essentially powerless.
Fort Laramie Treaty (1868)
The Fort Laramie Treaty was negotiated with the Sioux (Dakota, Lakota, and Nakota Nations) and the Arapaho Tribe. It established the Great Sioux Reservation, which comprised all of present-day South Dakota west of the Missouri River, and protected the sacred Black Hills, designating the area as “unceded Indian Territory.” It only took until 1874 for the U.S. to violate the terms of the treaty when gold was discovered in the Black Hills. The boundaries outlined in the treaty were hastily redrawn to allow white Americans to mine the area.
In the 1980 case United States v. Sioux Nation of Indians, the Supreme Court ruled that the U.S. had illegally expropriated the Black Hills, and that the Sioux were entitled to over $100 million in reparations. The Sioux turned down the money, saying that the land had never been for sale. Conflicts over the U.S.’s illegal usage of Sioux lands outlined in the Fort Laramie Treaty are ongoing. In 2018, the Rosebud Sioux Tribe and the Fort Belknap Indian Community sued the Trump administration for violations concerning the permitting of the Keystone XL Pipeline, which was shut down in June 2021.
The outplacement and adoption of indigenous children
From the beginning of the colonial period, Native American children were particularly vulnerable to removal by colonizers. Captured children might be sold into slavery, forced to become religious novitiates, made to perform labor, or adopted as family members by Euro-Americans; although some undoubtedly did well under their new circumstances, many suffered. In some senses, the 19th-century practice of forcing children to attend boarding school was a continuation of these earlier practices.
Before the 20th century, social welfare programs were, for the most part, the domain of charities, particularly of religious charities. By the mid-20th century, however, governmental institutions had surpassed charities as the dominant instruments of public well-being.
As with other forms of Northern American civic authority, most responsibilities related to social welfare were assigned to state and provincial governments, which in turn developed formidable child welfare bureaucracies. These were responsible for intervening in cases of child neglect or abuse; although caseworkers often tried to maintain the integrity of the family, children living in dangerous circumstances were generally removed.
The prevailing models of well-being used by children’s services personnel reflected the culture of the Euro-American middle classes.
They viewed caregiving and financial well-being as the responsibilities of the nuclear family; according to this view, a competent family comprised a married couple and their biological or legally adopted children, with a father who worked outside the home, a mother who was a homemaker, and a residence with material conveniences such as electricity.
These expectations stood in contrast to the values of reservation life, where extended-family households and communitarian approaches to wealth were the norm.
For instance, while Euro-American culture has emphasized the ability of each individual to climb the economic ladder by eliminating the economic “ceiling,” many indigenous groups have preferred to ensure that nobody falls below a particular economic “floor.”
In addition, material comforts linked to infrastructure were simply not available on reservations as early as in other rural areas.
For instance, while U.S. rural electrification programs had ensured that 90 percent of farms had electricity by 1950—a tremendous rise compared with the 10 percent that had electricity in 1935—census data indicated that the number of homes with access to electricity did not approach 90 percent on reservations until 2000.
These kinds of cultural and material divergences from Euro-American expectations instantly made native families appear to be backward and neglectful of their children.
As a direct result of these and other ethnocentric criteria, disproportionate numbers of indigenous children were removed from their homes by social workers. However, until the mid-20th century there were few places for such children to go; most reservations were in thinly populated rural states with few foster families, and interstate and interethnic foster care and adoption were discouraged.
As a result, native children were often institutionalized at residential schools and other facilities. This changed in the late 1950s, when the U.S. Bureau of Indian Affairs joined with the Child Welfare League of America in launching the Indian Adoption Project (IAP), the country’s first large-scale transracial adoption program. The IAP eventually moved between 25 and 35 percent of the native children in the United States into interstate adoptions and interstate foster care placements. Essentially all of these children were placed with Euro-American families.
Appalled at the loss of yet another generation of children—many tribes had only effected a shift from government-run boarding schools to local schools after World War II—indigenous activists focused on the creation and implementation of culturally appropriate criteria with which to evaluate caregiving.
They argued that the definition of a functioning family was a matter of both sovereignty and civil rights—that a community has an inherent right and obligation to act in the best interests of its children and that individual bonds between caregiver and child are privileged by similarly inherent, but singular, rights and obligations.
The U.S. Indian Child Welfare Act (1978) attempted to address these issues by mandating that states consult with tribes in child welfare cases. It also helped to establish the legitimacy of the wide variety of indigenous caregiving arrangements, such as a reliance on clan relatives and life with fewer material comforts than might be found off the reservation. The act was not a panacea, however; a 2003 report by the Child Welfare League of America, “Children of Color in the Child Welfare System,” indicated that, although the actual incidence of child maltreatment in the United States was similar among all ethnic groups, child welfare professionals continued to substantiate abuse in native homes at twice the rate of substantiation for Euro-American homes. The same report indicated that more than three times as many native children were in foster care, per capita, as Euro-American children.
Canadian advocates had similar cause for concern. In 2006 the leading advocacy group for the indigenous peoples of Canada, the Assembly of First Nations (AFN), reported that as many as 1 in 10 native children were in outplacement situations; the ratio for nonnative children was approximately 1 in 200. The AFN also noted that indigenous child welfare agencies were funded at per capita levels more than 20 percent under provincial agencies. Partnering with a child advocacy group, the First Nations Child and Family Caring Society of Canada, the AFN cited these and other issues in a human rights complaint filed with the Canadian Human Rights Commission, a signal of the egregious nature of the problems in the country’s child welfare system.
The colonization of the Americas involved religious as well as political, economic, and cultural conquest. Religious oppression began immediately and continued unabated well into the 20th—and some would claim the 21st—century.
Although the separation of church and state is given primacy in the U.S. Bill of Rights (1791) and freedom of religion is implied in Canada’s founding legislation, the British North America Act (1867), these governments have historically prohibited many indigenous religious activities.
For instance, the Northwest Coast potlatch, a major ceremonial involving feasting and gift giving, was banned in Canada through an 1884 amendment to the Indian Act, and it remained illegal until the 1951 revision of the act.
In 1883 the U.S. secretary of the interior, acting on the advice of Bureau of Indian Affairs personnel, criminalized the Plains Sun Dance and many other rituals; under federal law, the secretary was entitled to make such decisions more or less unilaterally. In 1904 the prohibition was renewed.
The government did not reverse its stance on the Sun Dance until the 1930s, when a new Bureau of Indian Affairs director, John Collier, instituted a major policy shift. Even so, arrests of Sun Dancers and other religious practitioners continued in some places into the 1970s.
Native American Tribes in the USA 2023
Did you know:
Nearly half of Native American children live in poverty, making rural reservation communities home to some of the most in-need minority groups in the United States.
Vacant FEMA trailers from Katrina given to Indian tribes in need of housing
Nearly six years after the hurricane, the mobile homes that became a symbol of the government’s failed response are finally being put to good use. FEMA has quietly given many of them away to American Indian tribes that are in desperate need of affordable housing.
In the aftermath of the 2005 hurricane, FEMA bought thousands of temporary homes for $20,000 to $45,000 each — both mobile homes and travel trailers.
The mobile homes proved impractical in areas where power and water service had been destroyed. And some people living in travel trailers started to fall sick because the RVs had high levels of formaldehyde, a cancer-causing chemical common in building materials.
People are still living in FEMA’s toxic trailers.
1752
1770
Bengal Famine of 1770
Yet another famine in Bengal, this horrific event killed a third of the population. Bengal was largely ruled by the English-owned East India Company, which ignored reports of severe drought and crop shortages and continued to increase taxes on the region. Farmers were unable to grow crops, and any food that could be purchased was too expensive for the starving Bengalis. The company also forced farmers to grow indigo and opium, as they were much more profitable than inexpensive rice. Without large rice stocks, people were left with no food reserves, and the ensuing famine killed 10 million Bengalis.
1783
1791
The President, Directors and Company of the Bank of the United States, commonly known as the First Bank of the United States, was a national bank, chartered for a term of twenty years, by the United States Congress on February 25, 1791.
1810
From 1810 to 1917, the U.S. federal government subsidized mission and boarding schools.
By 1885, 106 Indian schools had been established, many of them on abandoned military installations. Using military personnel and Indian prisoners, boarding schools were seen as a means for the government to forcibly assimilate Native children.
US INDIAN BOARDING SCHOOL HISTORY
The truth about the US Indian boarding school policy has largely been written out of the history books. There were more than 350 government-funded, and often church-run, Indian Boarding schools across the US in the 19th and 20th centuries. Indian children were forcibly abducted by government agents, sent to schools hundreds of miles away, and beaten, starved, or otherwise abused when they spoke their native languages.
Beginning with the Indian Civilization Act Fund of March 3, 1819 and the Peace Policy of 1869 the United States, in concert with and at the urging of several denominations of the Christian Church, adopted an Indian Boarding School Policy expressly intended to implement cultural genocide through the removal and reprogramming of American Indian and Alaska Native children to accomplish the systematic destruction of Native cultures and communities. The stated purpose of this policy was to “Kill the Indian, Save the Man.”
Between 1869 and the 1960s, hundreds of thousands of Native American children were removed from their homes and families and placed in boarding schools operated by the federal government and the churches. Though we don’t know how many children were taken in total, by 1900 there were 20,000 children in Indian boarding schools, and by 1925 that number had more than tripled.
The Native children in the U.S. who were voluntarily or forcibly removed from their homes, families, and communities during this time were taken to schools far away, where they were punished for speaking their Native languages, banned from acting in any way that might be seen to represent traditional or cultural practices, and stripped of traditional clothing, hair, and personal belongings and behaviors reflective of their Native culture. They suffered physical, sexual, cultural and spiritual abuse and neglect, and in many cases experienced treatment that constituted torture. Many children never returned home, and their fates have yet to be accounted for by the U.S. government.
How Boarding Schools Tried to ‘Kill the Indian’ Through Assimilation
Native American tribes are still seeking the return of their children.
How the US stole thousands of Native American children
For decades, the US took thousands of Native American children and enrolled them in off-reservation boarding schools. Students were systematically stripped of their languages, customs, and culture. And even though there were accounts of neglect, abuse, and death at these schools, they became a blueprint for how the US government could forcibly assimilate native people into white America.
1813
Blistering, bleeding, ice cold baths: The ‘moral’ treatment of patients in America’s first ‘progressive’ private asylum founded 200 years ago
- Philadelphia Quakers founded in 1813 the first private mental hospital in the United States, The Asylum for Persons Deprived of the Use of Their Reason
- This was in response to the way mental institutions at the time were treating their patients by allowing visitors to pay to watch them restrained behind bars
- The Quaker asylum practiced a progressive method of care that came to be known as ‘moral treatment’, but some of its manifestations could be cruel.
1830
The Trail of Tears was an ethnic cleansing and forced displacement of approximately 60,000 people of the “Five Civilized Tribes” between 1830 and 1850 by the United States government.
The Cherokees moved through the Trail of Tears for six months, starting from October 1838.
The Cherokees went through a painful journey with little to no food, water, or other kinds of supplies. On the Trail of Tears, their numbers dwindled. They lost their people to starvation, dehydration, and most especially disease. The Northern Route that most of the Cherokees favored as the more practical choice did not fare well for them. Despite being given wagons and horses by the U.S. government, the trip had still been difficult as the weather made the routes impassable for wagons. This forced the elderly, women, and children to walk the snow-covered path between the Ohio and Mississippi rivers.
The Cherokees who traveled the water route also struggled.
It was summer when the U.S. government released some detainees. These Cherokee groups journeyed by boat along the water route, but the water levels were too low, forcing them to move by foot instead. The summer heat and drought made the trek even more excruciating. Disease spread through the people, costing them three to five lives each day. To this day, historians are unsure of exactly how many lives were lost on the Trail of Tears, but they estimate that almost one-fifth of the Cherokees did not make it.
Between 1830 and 1850, the U.S. government forced the Cherokee, the Choctaw, and other tribes off their ancestral lands with deadly force in what’s become known as the Trail of Tears.
Throughout the 1830s, President Andrew Jackson ordered the forced removal of tens of thousands of Native Americans from their homelands east of the Mississippi River. This perilous journey to designated lands in the west, known as the Trail of Tears, was fraught with harsh winters, disease, and cruelty.
The name came to encompass the removal of all five tribes that occupied the southeastern United States.
All tribes incurred thousands of deaths and all experienced the sorrow of being ousted from their ancestral homelands. Today, many historians view Jackson’s actions as nothing short of ethnic cleansing.
1831
The history of electricity is long and complex and dates back to ancient times. The Greeks discovered static electricity around 600 BC, when Thales rubbed objects together and found that static electricity could be created this way.
However, the first major breakthrough in electricity occurred in 1831, when British scientist Michael Faraday discovered the basic principles of electricity generation. He observed that he could create or “induce” electric current by moving magnets inside coils of copper wire. Building on the experiments of Franklin and others, Faraday’s discovery laid the foundation for modern electrical technology.
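Faraday’s observation is usually summarised today by his law of induction (a standard textbook formula, not one quoted in the source above): the voltage induced in a coil equals the rate at which the magnetic flux through it changes, which is why moving a magnet inside a coil of copper wire drives a current.

$$\mathcal{E} = -\frac{d\Phi_B}{dt}$$

Here $\mathcal{E}$ is the induced electromotive force and $\Phi_B$ is the magnetic flux through the circuit; the minus sign (Lenz’s law) indicates that the induced current opposes the change in flux.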
1837
Immanuel Nobel emigrates from Sweden to St Petersburg, Russia; his family joins him there in 1842.
Alfred Nobel 16 yrs of age. 1850, Source: Wikimedia Commons.
Alfred (left) as a teenager, with the younger of his two elder brothers, Ludvig, photographed in St. Petersburg, probably around the end of the 1840s.
Nikolai Nikolajewitsch Sinin [Nikolai N. Zinin] Organic Chemist and University Professor, 1812-1880, Alfred Nobel’s teacher
Ascanio Sobrero, 1812-1888, discoverer of Nitroglycerine.
Italian chemist and assistant to Professor J. T. Pelouze in Paris, under whom Alfred Nobel also trained.
1812
How Many Times Has the US Officially Declared War?
The official declarations of war occurred during five separate military conflicts, starting in 1812 and, most recently, in 1942.
Known as the “second war of independence,” the War of 1812 was America’s first military test as a sovereign nation. President James Madison, angered at Great Britain’s refusal to respect America’s neutrality in the ongoing conflict between Great Britain and France, asked Congress to declare war on its former colonial overlord.
1829
July 23, 1829 – William Austin Burt, of the United States, invents and patents the typewriter, at the time called the typographer.
1839
THE 1839 CORINGA CYCLONE
The Coringa cyclone made landfall at the port city of Coringa on India’s Bay of Bengal on Nov. 25, 1839, whipping up a storm surge of 40 feet (12 m), according to NOAA’s Atlantic Oceanographic and Meteorological Laboratory Hurricane Research Division. The hurricane’s wind speeds and category are not known, as is the case for many storms that took place before the 20th century. About 20,000 ships and vessels were destroyed, along with the lives of an estimated 300,000 people.
1840
1840-1842: The First Opium War - Great Britain flooded China with opium, causing an addiction crisis. The Qing Dynasty banned the drug, and a military confrontation resulted. British forces shut down Chinese ports, and Hong Kong was handed over to them.
1845
Great Famine Ireland
One of the most famous famines in history, the Great Famine, was caused by a devastating potato disease. 33% of the Irish population relied on the potato for sustenance, and the onset of the disease in 1845 triggered mass starvation that lasted until 1853. The large Catholic population was suppressed by British rule and left unable to own or lease land or hold a profession. When the blight struck, British ships prevented other nations from delivering food aid. Ireland experienced a mass exodus, with upwards of 2 million people fleeing the country, many to the United States. At its conclusion in 1853, 1.5 million Irish were dead, and an additional 2 million had emigrated. In total, the population of Ireland shrank by a resounding 25%.
1846
The 1846 War with Mexico started as a land dispute. In 1836, Texas won independence from Mexico to become the Republic of Texas, but Mexico never relinquished its claim on that land. So when the United States annexed Texas in 1845, tensions escalated between the northern and southern neighbors. When President James Polk sent U.S. troops to patrol the Rio Grande border, the Mexican Army attacked, giving Polk the justification he needed to ask Congress to declare war.
1847
The American Medical Association
The American Medical Association (AMA) was organized in 1847 in Philadelphia through the efforts of Nathan Davis and Nathaniel Chapman primarily to deal with the lack of regulations and standards in medical education and medical practice. Some mental hospital superintendents became active members. Cordial relations between the two groups continued, and members of each attended the others’ meetings.
1848
It All Began at Sutter’s Mill in 1848: An Overview of the California Gold Rush
On January 8, 1848, James W. Marshall, overseeing the construction of a sawmill at Sutter’s Mill in the territory of California, literally struck gold. His discovery of trace flecks of the precious metal in the soil at the bottom of the American River sparked a massive migration of settlers and miners into California in search of gold. The Gold Rush, as it became known, transformed the landscape and population of California.[1]
Arriving in covered wagons, clipper ships, and on horseback, some 300,000 migrants, known as “forty-niners” (named for the year they began to arrive in California, 1849), staked claims to spots of land around the river, where they used pans to extract gold from silt deposits.
Prospectors came not just from the eastern and southern United States, but from Asia, Latin America, Europe, and Australia as well. Improvements in steamship and railroad technology facilitated this migration, which dramatically reshaped the demographics of California. In 1849, California established a state constitution and government, and formally entered the union in 1850.[2]
Life as a Forty-Niner
Though migration to California was fueled by gold-tinted visions of easy wealth and luxury, life as a forty-niner could be brutal. While a small number of prospectors did become rich, the reality was that gold panning rarely turned up anything of real value, and the work itself was back-breaking.
The lack of housing, sanitation, and law enforcement in the mining camps and surrounding areas created a dangerous mix. Crime rates in the goldfields were extremely high. Vigilante justice was frequently the only response to criminal activity left unchecked by the absence of effective law enforcement. As prospectors dreaming of gold poured into the region, formerly unsettled lands became populated, and previously small settlements, such as the one at San Francisco, exploded.
A forty-niner panning for gold in the American River, 1850
As competition flared over access to the goldfields, xenophobia and racial prejudice ran rampant. Chinese and Latin American immigrants were routinely subjected to violent attacks at the hands of white settlers and miners who adhered to an extremely narrow view of what it meant to be truly “American.”
Illustration depicting Chinese gold prospectors during the Gold Rush
As the state government of California expanded to oversee the booming population, widespread nativist (anti-immigrant) sentiment led to the establishment of taxes and laws that explicitly targeted immigrants, particularly Chinese immigrants.[3]
Violence across the Land
As agriculture and ranching expanded to meet the needs of the hundreds of thousands of new settlers, white settlers’ violence toward Native Americans intensified. Peter Hardeman Burnett, the first governor of California, openly declared his contempt for the native population and demanded its immediate removal or extinction. Under Burnett’s leadership, the state of California paid bounties to white settlers in exchange for Indian scalps. As a result, vigilante groups of miners, settlers, and loggers formed to track down and exterminate California’s native population, which by 1890 had been almost completely decimated.
Though the Gold Rush had a transformative effect on California’s landscape and population, it lasted for a surprisingly brief period, from 1848 to 1855. It did not take long for gold panning to turn up whatever gold remained in silt deposits, and as the extraction techniques required to mine for gold became increasingly complex, gold mining became big business. As the mining industry exploded, individual gold-diggers simply could not compete with the level of resources and technological sophistication of the major mining conglomerates.
1854
The orphan trains operated between 1854 and 1929, relocating about 200,000 children.
The Orphan Train movement was an effort to transport orphaned or abandoned children from cities on the United States East Coast to homes in the newly settled Midwest.
The movement was created in 1853 by Protestant minister Charles Loring Brace, founder of the Children’s Aid Society of New York City. The orphan trains ran from 1854 to 1929, delivering an estimated 200,000 to 250,000 orphaned or abandoned children to new homes.
The orphan train movement was the forerunner of the modern American foster care system and led to the passage of child protection and health and welfare laws. The children were placed on display in local train stations, and placements were frequently made with little or no investigation or oversight.
The orphan train movement was based on the theory that the innocent children of poor Catholic and Jewish immigrants could be rescued and Americanized if they were permanently removed from depraved urban surroundings and placed with upstanding Anglo-Protestant farming families.
1856
Colonial war
The Second Opium War, also known as the Second Anglo-Sino War, the Second China War, the Arrow War, or the Anglo-French expedition to China, was a colonial war lasting from 1856 to 1860, which pitted …
1859
· Immanuel Nobel returns to Sweden, and the creditors of his insolvent workshop in St Petersburg plead with his son Ludvig to continue the operations.
1861
The American Mafia, commonly referred to in North America as the Italian American Mafia, the Mafia, or the Mob, is a highly organized Italian American criminal society and organized crime group. The o…
1862
The Internal Revenue Service (IRS) was formed on July 1, 1862.
The Ludvig Nobel Engineering Works Company was formed in St Petersburg.
1864
Ludvig Nobel in a letter to his brother, Robert: “Petroleum has a bright future!”
· Robert Nobel starts the Lamp and lamp oil warehouse, Aurora, in Finland.
1865
The 6 most surprising reactions to Abraham Lincoln’s death
An illustration of Lincoln’s death. Universal History Archive/Getty Images
The assassination of Abraham Lincoln is widely accepted today as an American tragedy. But it wasn’t always that way.
When the news of Lincoln’s death, 150 years ago today, first reached the public, the reactions were as varied and visceral as the reactions to his life and career. Many people mourned — some even sought out his bloodied clothing and other relics. But others, in both the North and South, celebrated and reveled in the president’s death. And many people simply didn’t believe it was real.
Historian Martha Hodes examines the actual public response in her book Mourning Lincoln, delving into hundreds of letters and diaries from the time. She paints a far more complicated picture of Lincoln’s death than the one we know.
“Most of the books I’d read about Lincoln or the Civil War made these blanket statements that the nation was in mourning,” Hodes says. But the world was far more complex than that. “The Union and Confederacy were terrible antagonists, and victory or defeat didn’t repair that.”
1) Many people dismissed Lincoln’s death as just another rumor
Another illustration of Lincoln on his deathbed.
Today, we all know the bare facts about Lincoln’s assassination. Late on April 14, 1865, Abraham Lincoln was shot at Ford’s Theatre by John Wilkes Booth. On the 15th, Lincoln died. But at the time, many people didn’t even know that.
“LINCOLN IS ALIVE & WELL”
The advent of the telegraph made it relatively easy to transmit information in 1865, but Hodes notes that “rumors were still faster than the telegraph.” Because of that, it was difficult to know if the assassination had really happened. Soldiers joked about “Madam Rumor” if they didn’t take the idea seriously, and if they did, it was jumbled with other rumors, like ones that claimed General Grant had died or that Secretary of War Edwin Stanton had been killed. That problem was even worse in the South, where telegraph lines had been ravaged by the war.
Some newspapers reported on Lincoln’s assassination quickly, but it took a while for the truth to spread and be confirmed. Hodes quotes one soldier in Ohio who said the news “could be traced to no reliable source.” Making things worse, some telegraph messages were sent claiming “Lincoln is alive & well.”
Even Lincoln’s son Tad didn’t know the truth right away. The night of the assassination, he was seeing a play at Grover’s Theatre. War Department clerk James Tanner was seeing the same play, and he reported that the president had been assassinated. However, somebody else shouted that it was just a rumor being spread by pickpockets.
2) Some Southerners gleefully celebrated. But others mourned.
An (optimistic) allegory of reconciliation between the North and South.
It shouldn’t be surprising that some people celebrated Lincoln’s death. He was the symbol of a war that had just ripped apart the nation, and he was a victor whose Southern opponents were still suffering the stings of their loss. Even so, the gleeful reactions could be shocking.
Confederate lawyer Rodney Dorman called the killer “a great public benefactor” and felt relieved at Lincoln’s assassination. (In his diary, he spelled Lincoln’s name “Lincon” to emphasize the “con” he felt Lincoln was.) Some Confederates also hoped that Lincoln’s death might change the course of the war and, as one wrote, “produce anarchy in Yankeedom.” Hodes quotes one teen who took to her diary: “Hurrah!” she wrote. “Old Abe Lincoln has been assassinated.”
“ALAS! FOR MY POOR COUNTRY”
That said, celebrations weren’t universal in the South. Other reports paint a different picture — one Union soldier wrote that throughout Virginia, people mourned Lincoln’s death. Similarly, one former slave owner wrote “Alas! for my poor Country” in his diary, recording a much more charitable response to the president’s death. Others simply ignored it, since they were busy rebuilding a crippled South.
The reaction was further divided in the South by race. “It was very starkly divided between black Southerners and white Southerners,” Hodes says. Black Southerners genuinely mourned Lincoln’s death, while white Southerners felt something closer to a sense of reprieve from Union dominance, though they still worried about the future of the Confederate states.
3) Some Northerners reveled in Lincoln’s death — with ugly results
A political cartoon depicting the “Copperheads.”
It’s easy to forget that Northerners didn’t universally support the Civil War or Abraham Lincoln. There were the “Copperheads,” vocal Northern Democrats who weren’t loyal to the Union cause. “People often write about the North loving Lincoln,” Hodes says. “Lincoln’s Northern antagonists were a minority, but a significant and vocal minority.”
Hodes found records of some “good Union Men” who celebrated in the privacy of their homes when Lincoln was killed. Publicly, cities like Trenton that had a reputation for anti-Union sentiment still mourned. But privately, some reveled. A company in New Jersey “secretly rejoiced” at the news of Lincoln’s death. A woman in Bloomington, Indiana, held a “grand dinner” to celebrate. A Minnesota woman wanted to celebrate at a ball.
Yet others reacted violently against anti-Lincoln sentiment in the North. “Lincoln’s mourners wanted to put forward a universal grief,” Hodes says, “but the fact that people in their own midst were also celebrating Lincoln’s assassination was galling and infuriating to them.” In April 1865, an anti-Lincoln man was tarred and feathered in Swampscott, Massachusetts. Publicly expressed dissent was not well tolerated in the North.
4) Many of Lincoln’s mourners felt like they’d lost their best friend
Lincoln was eagerly greeted by former slaves in Richmond.
On April 4, 1865, shortly after the fall of Richmond to Union forces, President Lincoln arrived in the city with his son Tad. According to reports of the time, overjoyed African Americans circled the president. It was their praise in particular that was interesting: they called Lincoln “father” or “master Abraham.” Later they called him their “best friend.”
“THE BEST FRIEND I EVER HAD”
That sentiment carried over after Lincoln’s assassination. In a letter to the New York Anglo-African newspaper, one writer said that Booth had “murdered their best friend,” and the sentiment of Lincoln as a friend was a common one among Lincoln’s sympathizers. Freedpeople called Lincoln “the best friend I ever had” and “their best earthly friend.” The New Orleans Black Republican newspaper called Lincoln the “greatest earthly friend of the colored race.”
That intimate relationship with Lincoln — one of friendship — is hard to imagine today. Hodes says that at the time, the term “best friend” had a sense more akin to familial closeness than we might think today — a best friend was someone with a truly intimate connection. For many at the time, especially the African Americans who felt they benefited from Lincoln’s friendship, it was the most appropriate term.
5) Lincoln’s death was wrapped up in Easter celebrations, and that prompted a religious response
An illustration of Lincoln’s funeral procession on April 25, 1865.
Lincoln was shot on Good Friday in 1865 and died the next day. That made Easter a particularly dramatic experience, even as rumors about Lincoln’s death circulated across the country.
Churches across the country were faced with the difficult task of celebrating Easter and mourning the death of Lincoln at the same time. As one woman wrote: “Everybody here seems trying to remember that God will bear us safely through this new & terrible trial, if we are faithful.” These meditations forced broader grappling with how God could allow Lincoln to die, and what that might mean for pious behavior on Earth. Some people even believed that Lincoln’s assassination was the only fitting capstone to the violent war.
Of course, it wasn’t only the country’s Christian majority who grappled with the assassination of Lincoln. Hodes quotes a synagogue in California whose members were “stricken with sorrow” but resolutely agreed to bow to “divine decree.”
6) People jostled for relics of his life
A postcard showing the bed where Lincoln died, perfectly preserved. This postcard is from 1931, but people sought Lincoln relics as soon as news broke that he’d been shot.
It makes sense that people would want souvenirs from Lincoln’s life. But his death prompted scrapbooking, thievery, and fervid collection. “It was part of the aftermath of immediate shock,” Hodes says. “Even after Lincoln was buried, people still wanted these relics.”
One War Department employee took a blood-soaked towel from the president, while another clerk picked up Lincoln’s bloody collar. Others made immediate pilgrimages to the theater and place where Lincoln died to see history in the making. Like those who travel to Ground Zero today, the appeal was elemental — as one woman said, it made everything “so vivid.” Hodes recalls that some people even asked for their letters of mourning to be returned to them, so they could have a record of their initial reaction to Lincoln’s death.
In a way, that points to why reactions to Lincoln’s assassination still matter today. Hodes was first drawn to the project after 9/11, when she realized how varied reactions to a traumatic event can be. “As someone who was in New York on that day, I have an entire carton of what I would call relics,” she says. “I bought postcards of the Twin Towers, and I had newspaper headlines, and I didn’t even do anything with them. It’s about preserving history in which you participated as an individual.” That’s what people did after Lincoln’s death, and we continue to have reactions that are just as powerful — and complicated — today.
1866
Birth Certificates are Federal Bank Notes
The American Bank Note Bldg. American Bank Note Company is a subsidiary of American Banknote Corporation and products range from currencies and credit cards to passports, driver’s licenses, and birth certificates.
In the USA, citizens have never obtained their original Birth Certificates — what they possess is a copy. Furthermore, these ‘copies’ have a serial number on them, issued on special Bank Bond paper and authorized by “The American Bank Note Company”. (More on this later).
Every citizen is given a number (*the red number on the Birth Certificate) and each live birth is reported to be valued at 650,000 to 750,000 Federal Reserve dollars in collateral from the Fed. Hence the saying “we are owned by the system”. LITERALLY.
FACT: The government recognizes two distinct classes of citizens: a state Citizen and a federal citizen. Learn the difference now.
“There are hundreds of thousands of sovereigns in the United States of America but I am not one of them. The sovereigns own their land in ‘allodium.’ That is, the government does not have a financial interest in their land. Because of this they do not need to pay property tax (school tax, real estate tax). Only the powers granted to the federal government in the Constitution for the United States of America define the laws that they have to follow.
This is a very small subset of the laws most of us have to follow. Unless they accept benefits from or contract with the federal government, they do not have to pay Social Security tax, federal income tax, or resident individual state income tax.
They do not need to register their cars or get a driver’s license unless they drive commercially. They will not have to get a Health Security Card. They can own any kind of gun without a license or permit. They do not have to use the same court system that normal people do. ~
*See below for information re: State Citizenship (How to become a…)
“Unbeknownst to most people, the class termed “US citizen” did not exist as a political status until 1866. It was a class and “political status” created for the newly freed slaves and did not apply to the people inhabiting the states of the union who were at that time state Citizens.” ~ Mr. Richard James, McDonald, former law enforcement, California
Now do the math.
If indeed 317 million US citizens are worth an average of $700,000 in collateral for the US debt, that would mean the US is worth roughly 222 Trillion dollars.
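For readers who want to check that arithmetic themselves, here is a minimal sketch in Python. The 317 million population figure and the $700,000 average are the claims made above, not independently verified numbers.

```python
# Quick check of the claim's own arithmetic (figures are the article's assumptions).
population = 317_000_000          # claimed number of US citizens
collateral_per_person = 700_000   # claimed average collateral value, in dollars

total = population * collateral_per_person
print(f"${total:,}")              # $221,900,000,000,000 -> roughly 222 trillion dollars
```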
Your birth certificate is really a bank note, which means you, the citizen, are what is known in the stock market as a commodity.
DEFINITION of ‘Commodity’
- A basic good used in commerce that is interchangeable with other commodities of the same type. Commodities are most often used as inputs in the production of other goods or services. The quality of a given commodity may differ slightly, but it is essentially uniform across producers. When they are traded on an exchange, commodities must also meet specified minimum standards, also known as a basis grade.
- Any good exchanged during commerce, which includes goods traded on a commodity exchange.
So if you didn’t catch it the first time, I will repeat myself at the risk of being redundant: US citizens are owned by the United States Federal Reserve, a note in the stock exchange, being traded as a commodity.
The note is printed by The American Bank Note Company. Who are they?
Background on American Bank Note Company: the following letter to the editor was printed in the New York Times.
AMERICAN BANK NOTE COMPANY, NEW-YORK, Saturday, Dec. 2, 1865. (New York Times)
“To the Editor of New-York Times:
The attention of this company has been drawn to a paragraph from the Washington Star of the 30th ult., giving an account of some examination made at the Treasury Department as to the evidence of the surreptitious impression of a genuine plate, used by the counterfeiter in printing the backs of the spurious one hundred dollar compound interest note, having been taken after or before the plates and dies prepared by the American Bank Note Company were delivered by that company to the Treasury Department.
This paragraph states this “investigation shows that the counterfeits are made up from a plate surreptitiously obtained from this (the American Bank Note) Company.”
This statement is supported by a supposed demonstration, which to those familiar with the business will need no refutation, and which would amuse the counterfeiter, whoever he may be. And as no other reason, and no actual proof is pretended to support this imputation upon the security of plates and dies in the custody of the company, the paragraph might be left to every careful reader’s own correction.
I beg, however, the favor of stating, through your paper, that this company is ready to submit to, and give every aid to, any examination into the matter which the Treasury Department may desire; and that “experts” or plain men, upon the most thorough scrutiny, will have no doubt that the surreptitious impression was made from the genuine plate in the hands of the government, and after it was changed from the condition in which it was delivered by this company. Very respectfully, your obedient servant,
~GEO. W. HATCH, President.”
American Bank Note Company is a subsidiary of American Banknote Corporation and ABnote Group: http://abnote.com/
Today, following a variety of financial transformations, the American Banknote Corporation produces a wide variety of secure and official documents. With operations worldwide, its products range from currencies and credit cards to passports, driver’s licenses, and birth certificates.
How does this work?
Why didn’t you learn this in school? According to many sources, including this excerpt from researcher Brian Kelly:
“When the UNITED STATES declared bankruptcy, pledged all Americans as collateral against the national debt, and confiscated all gold, eliminating the means by which you could pay, it also assumed legal responsibility for providing a new way for you to pay, and it did that by providing what is known as the Exemption, an exemption from having to pay for anything. In practical terms, though, this meant giving each American something to pay with, and that ‘something’ is your credit.
Your value to society was then and still is calculated using actuarial tables, and at birth, bonds equal to this ‘average value’ are created. I understand that this is currently between one and two million dollars. These bonds are collateralized by your birth certificate, which becomes a negotiable instrument. The bonds are hypothecated, traded until their value is unlimited for all intents and purposes, and all that credit created is technically and rightfully yours.
In point of fact, you should be able to go into any store in America and buy anything and everything in sight, telling the clerk to charge it to your Exemption account, which is identified by a nine-digit number that you will recognize as your Social Security number without the dashes. It is your EIN, which stands for Exemption Identification Number.”
The FEDERAL RESERVE BANK is not owned and controlled by the U.S. Government; rather, it is owned and operated by international families. You are owned by foreign bankers, and your (physical) body is a collateral bond that has been issued against all your future earnings, your children’s future earnings, and so on. Were you taught this in school?
Why not?
If this is indeed the case, how do you change your current status?
The fact is, thousands of citizens have already changed their ‘slavery’ status by relinquishing the agreement and reverting to their Sovereign status: the inalienable rights you were born with, such as the constitutional rights to life, liberty, and property, which are not transferable and are thus termed inalienable.
Questions to ask yourself:
- What type of person are you?
- What class of citizen are you?
- Can you change this status?
The Uniform Commercial Code (UCC) and the law of contracts make it difficult to conduct business in the USA. Now that you know your Birth Certificate (registration of birth) is nothing more than a contract with the government, what will you do to change this? Did they tell you that you were signing a contract? Did you know that you didn’t even have to register the birth?
1868
1869
The first published account of what became the Mafia in the United States dates to the spring of 1869. The New Orleans Times reported that the city’s Second District had become overrun by “well-known and notorious Sicilian murderers, counterfeiters and burglars, who, in the last month, have formed a sort of general co-partnership or stock company for the plunder and disturbance of the city.”
Emigration from southern Italy to the Americas was primarily to Brazil and Argentina, and New Orleans had a heavy volume of port traffic to and from both locales.
1870
· Ludvig Nobel is awarded the privilege of using the imperial Russian heraldic emblem, the double eagle.
The first automobile suitable for use on existing wagon roads in the United States was a steam-powered vehicle invented in 1871 by Dr. J.W. Carhart, a minister of the Methodist Episcopal Church, in Racine, Wisconsin.
In the 1870s, exhibitions of so-called “exotic populations” became popular throughout the western world. Human zoos could be seen in many of Europe’s largest cities, such as Paris, Hamburg, London, and Milan, as well as in American cities such as New York.
1871
THE UNITED STATES BECAME A FOREIGN CORPORATION IN 1871
By: TLB Staff Writer | David-William June 24, 2016
Every day, more and more people are learning that the “united states of America,” the representative Republic that it was, died when the Southern States abandoned Congress forever in 1861. Along with the pain of finding that you’ve been steeped in lies throughout your entire education, there’s still plenty more to cry about as you start putting the pieces together.
Seriously, after this article, you’ll feel the temptation to learn more and then go slap your kids’ teachers and professors around until they promise to make some kind of change or resign. Trust me, if we’re sending our kids to public school today, we’re not doing them any good. Sorry for the harsh words, but when we find out that the schools are really indoctrinating our kids to be slaves, and stupid ones at that, yet we send them anyway, it’s time to rethink our philosophy.
When we fail to plan, we plan to fail. How can we say our kids have great teachers while they’re teaching them bull$#!+ and lies? That’s what we think when we don’t know the truth either!! OUCH!!! Today’s teachers are shoveling propaganda during an age of information with the real facts thumbing them in the eyes. There are no excuses and there’s no reason to hear any. If it comes from public schools, public agencies, public MEDIA, or public servants, it’s a stinking lie! Even the term public means private if it’s connected to THE UNITED STATES.
_______
A MESSAGE FOR ANYONE WHO IS CRAZY ENOUGH TO CLAIM U.S. CITIZEN STATUS
“Then, by passing the Act of 1871, Congress formed a corporation known as THE UNITED STATES. This corporation, owned by foreign interests, shoved the organic version of the Constitution aside by changing the word ‘for’ to ‘of’ in the title. Let me explain: the original Constitution drafted by the Founding Fathers read: ‘The Constitution for the united states of America.’ [note that neither the words ‘united’ nor ‘states’ began with capital letters] But the CONSTITUTION OF THE UNITED STATES OF AMERICA’ is a corporate constitution, which is absolutely NOT the same document you think it is. First of all, it ended all our rights of sovereignty [sui juris]. So you now have the HOW, how the international bankers got their hands on THE UNITED STATES OF AMERICA.”
“As an instrument of the international bankers, the UNITED STATES owns you from birth to death. It also holds ownership of all your assets, of your property, even of your children. Think long and hard about all the bills, taxes, fines, and licenses you have paid for or purchased. Yes, they had you by the pockets. If you don’t believe it, read the 14th Amendment. See how ‘free’ you really are. Ignorance of the facts led to your silence. Silence is construed as consent; consent to be beneficiaries of a debt you did not incur. As a Sovereign People we have been deceived for hundreds of years; we think we are free, but in truth we are servants of the corporation.”
THE UNITED STATES is the Vatican! They’re the Jesuits and the Zionists all rolled into one big rat’s nest! They’re the ones running the Pentagon, murdering everyone. They’re not about God! They work for Satan. They have been pumping your heads full of septic stew and they’re getting you to foot the bill! They’re the ones who own the I.R.S., so wake up.
The Grisly Story of One of America’s Largest Lynchings
One of the biggest mass lynchings in United States history: New Orleans, 1891
“Eleven men lay dead. The other eight Italian prisoners were spared, either because they had not been found or because someone had vouched for their innocence. For those who still had not seen enough, arrangements were made for small groups of ten to fifteen spectators each to pass through the prison to witness the vigilantes’ handiwork.” (Persico)
Bodies of some of the lynched Italian Americans arranged for public viewing. Illustrated American, April 4, 1891.
In Italy, public opinion clamored for justice and the vindication of Italy’s national honor.
The Prime Minister of Italy demanded punishment of the murderers, and the US refused. Italy’s Prime Minister ordered the Italian ambassador home from Washington.
Rumors now began to spread of Italian warships headed for the American coast. Confederate veterans from Tennessee and the Shelby Rifles of Texas volunteered to fight for Old Glory against Rome. Uniontown, Alabama, offered fifteen hundred men. From Georgia the War Department received an offer of “a company of unterrified Georgia rebels to invade Rome, disperse the Mafia and plant the Stars and Stripes on the dome of St. Peter’s.”
On May 5, a New Orleans grand jury convened to look into the murders. The grand jury’s report concluded that some of the jurors had been subject to “a money influence to control their decision.” As a result, six men were indicted for attempted bribery, and one person was convicted, serving a short sentence.
As for the lynch mob, the grand jury decided that it “embraced several thousand of the first, best and even the most law-abiding citizens of the city … in fact, the act seemed to involve the entire people of the parish and the City of New Orleans. …” And after thoroughly examining the subject the grand jury reported there was no reason to indict anybody for the lynching.
However, due to the diplomatic sparring with Italy, the Department of Justice looked into the incident. After reviewing the eight-hundred-page transcript, U.S. Attorney William Grant reported that the evidence against the defendants was “exceedingly unsatisfactory” and inconclusive. Later, all outstanding charges against those who had survived the prison massacre were dropped.
Public sentiment across the nation held that justice had triumphed, in the streets of New Orleans if not in its courts. A few did disagree; The Nation magazine said we had “cut a sorry figure before the civilized world.” But New Orleans was content. “The hand of the assassin has been stayed,” the New Delta reported. “The Mafia is a thing of the past.”
President Harrison would have ignored the New Orleans carnage had the victims been black. The Italian government made that impossible. It broke off diplomatic relations and demanded an indemnity, which the Harrison administration paid. Harrison’s 1891 State of the Union message called on Congress to protect foreign nationals, though not black Americans, from mob violence.
To appease Italian Americans, Harrison issued a Columbus Day proclamation in 1892. The myth of Columbus as “the first immigrant” is still told today.
Phrased in a mockery of the Italian American dialect, “Who killa da chief?” remained a taunt used to insult Italian Americans in New Orleans well into the 1990s.
1872
· Ludvig Nobel and Peter Bilderling build a factory for weapons production in Izhevsk in the Urals.
1873
· Robert Nobel looks for walnut wood for Ludvig’s rifles in the forests of the Caucasus.
· The monopoly-contract system for renting oil deposits is abolished in favor of public auction of contracts in the Caucasus.
1874
· DDT: a colorless, tasteless, and almost odorless crystalline chemical compound, an organochloride. Originally developed as an insecticide, it became infamous for its environmental impacts. DDT was first synthesized in 1874 by the Austrian chemist Othmar…
DDT exposure in people
Exposure to DDT in people most likely occurs through eating contaminated foods, including meat, fish, and dairy products, though it can also occur by breathing or touching products contaminated with DDT. In the body, DDT converts into several breakdown products called metabolites, including dichlorodiphenyldichloroethene (DDE); both DDT and DDE persist in the body and the environment and are stored in fatty tissue. In pregnant women, DDT and DDE can pass to the fetus, and both chemicals can be present in breast milk, resulting in exposure to nursing infants.
How DDT Affects People’s Health
Human health effects from DDT at low environmental doses are unknown. Following exposure to high doses, human symptoms can include vomiting, tremors or shakiness, and seizures. Laboratory animal studies show DDT exposure can affect the liver and reproduction. DDT is a possible human carcinogen according to U.S. and International authorities.
Levels of DDT and DDE in the U.S. Population
CDC scientists measured DDT and its metabolite DDE in the serum (a clear part of blood) of 1,956 participants aged 12 years and older who took part in CDC’s National Health and Nutrition Examination Survey (NHANES) during 2003–2004. (National Report on Human Exposure to Environmental Chemicals and Updated Tables). By measuring DDT and DDE in the serum, scientists can estimate the amounts of these chemicals entering people’s bodies.
- A small portion of the population had measurable DDT. Most of the population had detectable DDE. DDE stays in the body longer than DDT, and DDE is an indicator of past exposure.
- Blood serum levels of DDT and DDE in the U.S. population appear to be five to ten times lower than levels found in smaller studies from the 1970s.
Finding measurable amounts of DDT and DDE in serum does not imply that those levels cause an adverse health effect. Biomonitoring studies of serum DDT and DDE provide physicians and public health officials with reference values, which can be used to determine whether people have been exposed to higher levels of DDT and DDE than the general population. Biomonitoring data also help scientists plan and conduct research on exposure and health effects.
Consequences of DDT Exposure Could Last Generations
Scientists found health effects in grandchildren of women exposed to the pesticide
Hailed as a miracle in the 1950s, the potent bug killer DDT (dichloro-diphenyl-trichloroethane) promised freedom from malaria, typhus and other insect-borne diseases. Manufacturers promoted it as a “benefactor of all humanity” in advertisements that declared, “DDT Is Good for Me!” Americans sprayed more than 1.35 billion pounds of the insecticide, nearly 7.5 pounds per person, on crops, lawns and pets and in their homes before biologist Rachel Carson and others sounded the alarm about its impacts on humans and wildlife. The fledgling U.S. Environmental Protection Agency banned DDT in 1972.
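As a rough sanity check on those figures, dividing the total sprayed by the per-person amount should land near the U.S. population of the era; the sketch below assumes the totals quoted above and a 1960 U.S. population of roughly 180 million.

```python
# Rough sanity check of the per-capita figure (uses the totals quoted above).
total_sprayed_lbs = 1.35e9   # ~1.35 billion pounds of DDT sprayed in the US
per_person_lbs = 7.5         # ~7.5 pounds per person, as stated above

implied_population = total_sprayed_lbs / per_person_lbs
print(f"{implied_population:,.0f}")  # 180,000,000 -> close to the US population around 1960
```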
Friends and family often ask Barbara Cohn, an epidemiologist at Oakland’s Public Health Institute, why she studies the effects of the long-banned pesticide. Her answer: DDT continues to haunt human bodies. In earlier studies, she found that the daughters of mothers exposed to the highest DDT levels while pregnant had elevated rates of breast cancer, hypertension and obesity.
Cohn’s newest study, on the exposed women’s grandchildren, documents the first evidence that DDT’s health effects can persist for at least three generations. The study linked grandmothers’ higher DDT exposure rates to granddaughters’ higher body mass index (BMI) and earlier first menstruation, both of which can signal future health issues.
“This study changes everything,” says Emory University reproductive epidemiologist Michele Marcus, who was not involved in the new research. “We don’t know if [other human-made, long-lasting] chemicals like PFAS will have multigenerational impacts—but this study makes it imperative that we look.” Only these long-term studies, Marcus says, can illuminate the full consequences of DDT and other biologically disruptive chemicals to help guide regulations.
In the late 1950s Jacob Yerushalmy, a biostatistician at the University of California, Berkeley, proposed an ambitious study to follow tens of thousands of pregnancies and measure how experiences during fetal development could affect health into adolescence and adulthood. The resulting Child Health and Development Study (CHDS) tracked more than 20,000 Bay Area pregnancies from 1959 to 1966. Yerushalmy’s group took blood samples throughout pregnancy, at delivery and from newborns while gathering detailed sociological, demographic and clinical data from mothers and their growing children.
Cohn took the helm of the CHDS in 1997 and began to use data from the children, then approaching middle age, to investigate potential environmental factors behind an increase in breast cancer. One possibility was exposure in the womb to a group of chemicals classified as endocrine disruptors—including DDT.
Human endocrine glands secrete hormones and other chemical messengers that regulate crucial functions, from growth and reproduction to hunger and body temperature. An endocrine-disrupting chemical (EDC) interferes with this finely tuned system. Many pharmaceutical and personal-care ingredients (such as the antibacterial agent triclosan and the anti-miscarriage drug diethylstilbestrol) act as EDCs, as do industrial chemicals like bisphenol A and polychlorinated biphenyls, and insecticides like DDT. “These chemicals hack our molecular signals,” says Leonardo Trasande, director of the Center for the Investigation of Environmental Hazards at New York University, who was not involved in the study.
Thawing tens of thousands of CHDS samples from decades earlier, Cohn and her colleagues measured the DDT in each mother’s blood to determine the amount of fetal exposure. In a series of studies, they connected this level to the children’s midlife heart health and breast cancer rates.
Fetuses produce all their egg cells before birth, so Cohn suspected these children’s prenatal DDT exposure might also affect their own future children (the CHDS group’s grandchildren). With an average age of 26 this year, these grandchildren are young for breast cancer—but they might have other conditions known to increase risk of it striking later.
Using more than 200 mother-daughter-granddaughter triads, Cohn’s team found that the granddaughters of those in the top third of DDT exposure during pregnancy had 2.6 times the odds of developing an unhealthy BMI. They were also more than twice as likely to have started their periods before age 11. Both factors, Cohn says, are known to raise the risk of later developing breast cancer and cardiovascular disease. These results, published in Cancer Epidemiology, Biomarkers, and Prevention, mark the first human evidence that DDT’s health threats span three generations.
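To make that statistic concrete, here is a minimal sketch of how an odds ratio of this kind is computed from a two-by-two table; the counts below are hypothetical illustrations, not the CHDS study’s actual data.

```python
# Illustration only: computing an odds ratio like the "2.6 times the odds" figure.
# The counts are invented for the example and are NOT taken from the CHDS study.
exposed_with_condition = 26      # e.g. high-exposure granddaughters with unhealthy BMI
exposed_without = 74
unexposed_with_condition = 12    # lower-exposure granddaughters with unhealthy BMI
unexposed_without = 88

odds_exposed = exposed_with_condition / exposed_without        # 26/74 ~= 0.35
odds_unexposed = unexposed_with_condition / unexposed_without  # 12/88 ~= 0.14
odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 2))      # ~= 2.58, i.e. roughly "2.6 times the odds"
```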
Akilah Shahib, 30, whose grandmother was in the CHDS study and who participated in the current work, says the results provide a stark reminder that current health problems may stem from long-ago exposures. “DDT was a chemical in the environment that my grandparents had no control over,” she says. “And it wasn’t the only one.”
To Andrea Gore, a toxicologist at the University of Texas at Austin, the new results are nothing short of groundbreaking. “This is the first really robust study that shows these kinds of multigenerational outcomes,” says Gore, who was not involved in the study.
Laboratory studies, including one by Cohn in 2019, have shown that DDT and other EDCs can lead to effects across generations via epigenetic changes, which alter how genes turn on and off. Cohn is also investigating the multigenerational effects of other endocrine disruptors, including BPA and polyfluorinated compounds.
Such research also highlights the need for long-term testing to determine a chemical’s safety, N.Y.U.’s Trasande says. Gore agrees, arguing that regulators should require more rigorous testing for endocrine-disrupting effects; while scientists learn about the specific mechanisms by which EDCs influence health over multiple generations, she adds, they should routinely look for hallmarks of such influences in lab toxicology studies.
Videos from the late 1940s show DDT being sprayed on children in a pool to showcase how ‘safe’ it was for use in the home and around humans. Source: https://www.youtube.com/watch?v=kbcHszMCIJM&t=28s
As Trasande puts it: “This study reinforces the need to make sure that this doesn’t happen again.”
The first risk of DDT is that it concentrates in biological systems, particularly in body fat. Once DDT enters the body, it is stored in fatty tissue, where it can build up over time and become toxic.