Review: Volume 14 - The 17th Century


How Great Britain was born - from the Restoration to the Great Exhibition. In 1660 England emerged from the devastations of the Civil Wars and restored the king, Charles II, to the throne. Over the next 190 years Britain would establish itself as the leading nation in the world - the centre of a burgeoning Empire, at the forefront of the Enlightenment and the Industrial Revolution. However, radical change also brought with it anxiety and violence. America was lost in the War of Independence, and calls for revolution at home were never far from the surface of everyday life. In this scintillating overview, William Gibson charts the era in which Britain changed the world, how the nation was transformed as a result, and the impact this transformation had upon ordinary men and women. This is the third book in the four-volume Brief History of Britain, which brings together some of the leading historians to tell our nation's story from the Norman Conquest of 1066 to the present day. Combining the latest research with accessible and entertaining storytelling, it is the ideal introduction for students and general readers.

To our generation fell the good fortune of re-discovering the Levellers. To the classical liberal historians they meant rather less than nothing; this neglect is puzzling. At the crisis of the English Revolution it was from the Levellers, and not from its commanders, that the victorious New Model Army derived its political ideas and its democratic drive. Even on a superficial glance the Leveller leaders are, as personalities, unusual and, indeed, unique. King Charles had Lilburne flogged as a youngster from Ludgate Hill to Palace Yard; Cromwell banished him in middle age to a dungeon in Jersey. But what we have rediscovered is not merely the fact that the Levellers anticipated our fathers in most of the social and political reforms of the next 300 years; the theme of this book is rather that they were, until Cromwell crushed them, the dynamic pioneers who held the initiative during the most formative years of the Interregnum. They would have won for our peasants in the mid-17th century what the Great Revolution gained for those of France at the close of the 18th.

The author, who has been exploring battlefields for 50 years, describes the celebrated battles - including Edgehill, Marston Moor and Naseby - together with the major political events which characterised one of the most turbulent periods in English history. Contemporary documents and leading secondary sources have been used to produce a picture of those troubled times - years which revolutionised government and witnessed the first faltering steps of Parliamentarians towards the democratic form of government known today.

In 1683, two empires – the Ottoman, based in Constantinople, and the Habsburg dynasty in Vienna – came face to face in the culmination of a 250-year power struggle: the Great Siege of Vienna. Within the city walls the choice of resistance over surrender to the largest army ever assembled by the Turks created an all-or-nothing scenario: every last survivor would be enslaved or ruthlessly slaughtered. Both sides remained resolute, sustained by hatred of their age-old enemy, certain that their victory would be won by the grace of God. Eastern invaders had always threatened the West, but the memory of the Turks, to whom the West’s ancient and deep fear of the East is viscerally attached, remains vivid and powerful. Long before their 1453 conquest of Constantinople, the Turks had raised the art of war to heights not seen since the Roman Empire. Although it was their best-recorded and most infamous attack, the 1683 siege was the historical culmination, not the extent, of the Turks’ sustained attempt to march westwards and finally obtain the city they had long called ‘The Golden Apple’. Their defeat was to mark the beginning of the decline of the Ottoman Empire.

Scholars have long argued over whether the 1648 Peace of Westphalia, which ended more than a century of religious conflict arising from the Protestant Reformations, inaugurated the modern sovereign-state system. But they largely ignore a more fundamental question: why did the emergence of new forms of religious heterodoxy during the Reformations spark such violent upheaval and nearly topple the old political order? In this book, Daniel Nexon demonstrates that the answer lies in understanding how the mobilization of transnational religious movements intersects with--and can destabilize--imperial forms of rule. Taking a fresh look at the pivotal events of the sixteenth and seventeenth centuries--including the Schmalkaldic War, the Dutch Revolt, and the Thirty Years' War--Nexon argues that early modern "composite" political communities had more in common with empires than with modern states, and introduces a theory of imperial dynamics that explains how religious movements altered Europe's balance of power. He shows how the Reformations gave rise to crosscutting religious networks that undermined the ability of early modern European rulers to divide and contain local resistance to their authority. In doing so, the Reformations produced a series of crises in the European order and crippled the Habsburg bid for hegemony.

The crucial part played by intelligence and espionage techniques - by spying - in Britain during the Civil Wars and the Commonwealth has rarely been studied, yet it is a key to understanding the dangerous politics and the open warfare of those troubled times. In this fascinating and original account, Julian Whitehead traces the rapid development of intelligence techniques during this, one of the most confused and uncertain phases of British history. His vivid narrative demonstrates how leaders on all sides set up increasingly effective systems for gathering and interpreting intelligence, and it shows the decisive impact intelligence had on events. The intrigue, the secret operations, the many plots and counter-plots, and the colourful personalities involved, make compelling reading.

William Cavendish embodies the popular image of a cavalier. He was both courageous and cultured. His passions were architecture, horses and women. And, along with the whole courtly world of King Charles I and his Cavaliers, he was doomed to failure. Cavendish was a master of manège (the art of teaching horses to dance) and obsessed with building beautiful houses in the latest style. He taught Charles I’s son to ride, and was the general of the king’s army in the North during the Civil War. Famously defeated at the Battle of Marston Moor in 1644, he went into a long continental exile before returning to England in triumph on the restoration of King Charles II to the throne in 1660. This is the story of one remarkable man, but it is also a rich evocation of the extraordinary organism which sustained him - his household. Lucy Worsley shows us the complex and fascinating hierarchies among the inhabitants of the great houses of the seventeenth century, painting a picture of conspiracy, sexual intrigue, clandestine marriage and gossip.

The Union of the Crowns of England and Scotland in 1603 dramatically changed the nature and level of interaction between the constituent parts of the British Isles, and over the course of the century that followed the seismic shocks of constitutional revolutions and civil wars were felt in each of the three very different kingdoms that had been forced together under one king. The chapters in this volume, each written by a leading scholar of the period, analyse in turn the response to the Union of 1603, the religious controversies under the early Stuarts, the Civil War, Commonwealth, and Restoration periods, and the social and economic context within which these developments took place. The final chapter then looks at the vibrant cultural interaction between the kingdoms of the British Isles in the seventeenth century, which stands in sharp contrast to the political, religious, and social doubts and fears that permeated the period. Throughout, the book maintains a careful balance, focusing on the ways in which the various tensions within each individual kingdom came together, whilst at the same time looking beyond the confines of any one of the kingdoms and recognizing their interlinking 'British' impact.

A fresh approach to the English civil war, War in England 1642-1649 focuses on answering a misleadingly simple question: what kind of war was it to live through? Eschewing descriptions of specific battles or analyses of political and religious developments, Barbara Donagan examines the 'texture' of war, addressing questions such as: what did Englishmen and women believe about war and know about its practice before 1642? What were the conditions in which a soldier fought - for example, how efficient was his musket (not very), and how did he know where he was going (much depended on the reliability of scouts and spies)? What were the rules that were supposed to govern conduct in war, and how were they enforced (by a combination of professional peer pressure and severe but discretionary army discipline and courts martial)? What were the officers and men of the armies like, and how well did they fight? The book deals even-handedly with royalists and parliamentarians, examining how much they had in common, as well as discussing the points on which they differed.

In April 1649 two 'Diggers', Gerrard Winstanley and William Everard, appeared in Whitehall before the leader of the army, Lord Fairfax, to declare that deliverance was at hand. "God is bringing his people out of slavery. Freedom is restored." This book surveys radical thinking arising from the Reformation, and writings that pressed for the rights to land. It includes important selected texts: Digger Manifestoes; The Appeal to the House of Commons; The Watchword to the City of London; and Winstanley's 'Utopia'. Lewis H. Berens was a Quaker whose study of Winstanley was first published in 1906.

The Cambridge History of Seventeenth-Century Philosophy

  • Publisher: Cambridge University Press
  • Online publication date: March 2008
  • Print publication year: 2000
  • Online ISBN: 9781139055451
  • Subjects: History of Philosophy, Area Studies, European History after 1450, History, European Studies, Early Modern Philosophy, Philosophy
  • Collection: Cambridge Histories - Philosophy & Political Thought


Book description

The Cambridge History of Seventeenth-Century Philosophy offers a uniquely comprehensive and authoritative overview of early-modern philosophy written by an international team of specialists. As with previous Cambridge Histories of Philosophy the subject is treated by topic and theme, and since history does not come packaged in neat bundles, the subject is also treated with great temporal flexibility, incorporating frequent reference to medieval and Renaissance ideas. The basic structure of the volumes corresponds to the way an educated seventeenth-century European might have organised the domain of philosophy. Thus, the history of science, religious doctrine, and politics feature very prominently.


‘The Cambridge History of Seventeenth-Century Philosophy is an outstanding collection that will be a standard reference work for some time to come and which will be an important point of call for anyone wanting to chart their way through the philosophy of one of the most exciting and significant centuries of all.’


Information Overload’s 2,300-Year-Old History

We’re all worried about the costs of information overload, and we typically associate these problems with new digital technologies. But in fact information overload has very deep roots: signs of it were already present in the accumulation of manuscript texts in pre-modern cultures, and it was further accelerated by the introduction of printing (in the 15th century in the case of Europe).

In the Western tradition, complaints about the abundance of books surface in antiquity (in Ecclesiastes 12:12 or Seneca in the 1st century CE). In 1255 the Dominican Vincent of Beauvais articulated eloquently the key ingredients of the feeling of overload which are still with us today: “the multitude of books, the shortness of time and the slipperiness of memory.” Vincent’s solution was to write a massive book (of some 4.5 million words) in which he gathered the “flowers” or best bits of all the books he was able to read, to spare others the costs (in time, effort and access to books) of doing so themselves.

In the 15th century, printing rapidly multiplied the number of books available and lowered their cost. The experience of overload, which had been limited to a privileged elite, became more widespread as more of the educated could buy more books than they could read or remember. They coped by taking notes to record the best passages from their reading and by relying on aids of various kinds — including people they hired to take notes for them, or printed reference works or, starting in the late 17th century, periodicals which circulated excerpts and book reviews.

Overload is not something that just happened to Europeans in the Renaissance as if they stumbled across ancient texts long forgotten in medieval libraries, or a new continent in America, or a new technology like printing. Overload was born from a drive to accumulate and save which became particularly visible in the Renaissance as individuals and institutions collected copies of ancient texts, exotic natural specimens and artifacts, forming the kernel of libraries and museums that have sometimes endured to the present.

Even well before the Renaissance, for every complaint about overload there were also enthusiasts for accumulation: at the Library of Alexandria (3rd century BCE), or in 1st-century Rome, with Pliny the Elder and his Natural history. Renaissance humanists were especially eager to save their writings and findings, given their keen awareness of the tragic loss of ancient learning despite their best efforts to retrieve it. Conrad Gesner (1516–65), for example, hailed printing and the growth of libraries as a guard against similar losses in the future.

But saving on a large scale required managing what was saved and providing tools for retrieval, hence the multiplication of finding tools, like alphabetical indexes of various kinds, detailed tables of contents, library and sales catalogs, and bibliographies. Many of these tools had first been developed in the 13th century by scholastics, but printing helped to diffuse them into more contexts and to a broader range of readers, and they have been with us ever since. For example, Diderot and d’Alembert presented their 17-volume alphabetically arranged Encyclopédie in the mid-18th century as the grounds from which civilization could be rebuilt if necessary after a catastrophe, but in fact we have not yet suffered a massive loss of learning of the kind they envisioned. Of course much has been lost in the process of transmission, some by accident, but also, more systematically, the voices of those marginalized on account of race, class, gender or geography. Nonetheless continued accumulation has been the dominant information regime since the Middle Ages.

The overload we experience today — millions of Google search results in a fraction of a second — has its costs, but it is also a privilege, the result of the efforts of generations of accumulation before us and of massively increased access to the consumption and production of information in the digital age. Yes, overload creates problems, but it has also inspired important solutions — methods of selecting, summarizing, sorting and storing first devised centuries ago and that still work today, as well as new ones no doubt forthcoming.

Ann Blair is Henry Charles Lea Professor of History at Harvard. She specializes in the cultural and intellectual history of early modern Europe, and writes and teaches on the history of the book, the history of education and of scholarship, and the interactions between science and religion. She is the author of Too Much To Know: Managing Scholarly Information Before the Modern Age (Yale UP 2010).

History of Neuropsychological Assessment

This chapter presents a historical overview of observations, instruments, and approaches in the area of neuropsychological assessment. In the 17th- and 18th-century literature dealing especially with language disorders following brain damage, one finds physicians' observations of striking dissociations in which some mental faculties were impaired while others remained intact. Around the middle of the 19th century, neuropsychiatrists like Carl Wernicke began to develop procedures for assessing more specific components of mental functioning. The German physicians Conrad Rieger and Theodor Ziehen seem to have developed the first neuropsychological test batteries. Kurt Goldstein, inspired by the rising Gestalt theory, argued that it is not the test score but the strategy a patient uses to perform a task that is important. Alexander Luria also promoted an approach to assessment based mainly on subjective judgment. Studies on individual differences led to the development of an intelligence test battery by Alfred Binet. This battery was later transformed into the Army Alpha and Army Beta tests for selecting soldiers. Components of these intelligence tests have survived in the test kit of the modern neuropsychologist. This tradition also stimulated the development of psychometric analysis of tests. Two pioneers in the field of neuropsychological assessment were Shepherd Ivory Franz, who favored a clinical approach, and Ward Halstead, who stimulated a strongly psychometric-based approach. The evaluation of language disorders has always been a specific area, requiring its own set of tests. The first comprehensive language battery was compiled by Bastian. Around the middle of the 20th century, when the localization-of-function approach had been rejected, neurologists preferred to examine language disorders clinically, using a battery that evaluated speech, comprehension, reading, and writing.

44/2018- The archaeological medieval textile fragments from Swedish cities


Photo: Ola Myrin, SHM

At the Historical Museum in Stockholm, work is right now being done to improve and share information about the approximately 1000 medieval archaeological textiles that the museum manages. This means that new photos are being taken and the fragments will be analyzed and categorized into visual groups, that will make it easier to talk about different textile qualities. Photo and analyzed result will then be uploaded into the museum’s database and become searchable. However, it will take some time before it is published because the museum is changing the system for the database. But we can’t wait and will start posting nice photos of the finds here on our blog.

The material currently under review comes from archaeological excavations made mainly in the 1970s in the city centers of different cities in Sweden. Part of the museum's collection comes from Enköping, from the quarter of Traktören, and the first textile presented today comes from there.

The archaeological medieval textile fragments from our cities can be divided into five groups:
1 – lost items, for example a mitten
2 – leftovers from manufacture
3 – fragments from worn-out garments
4 – residue of coarser packaging textile
5 – threads, felted wool or loose wool

Of the approximately 1,000 fragments found in the museum a very small part belongs to group 1.

The fragment is a 2/1 twill, wool. It is very worn, so it probably comes from a worn-out garment or an interior textile such as a cushion. Today the textile is light brown and no visible color remains. But if you look at the top right of the picture you can see some loose threads. These indicate that the textile once had a woven stripe, woven in plain weave/extended tabby. The dating is “medieval”. We place this fabric in the 14th century, as it has the characteristic stripes that were common at the time.

The fabric is what we would describe as a medium quality, suitable for example for kirtles and hoods.
/ Amica and Maria

Slave Traffic from Africa: 1451–1870

  1. 1451–1600: beginning (1/4 million)
  2. 1601–1700: growing (1.3 million)
  3. 1701–1810: peaking (6 million)
  4. 1811–1870: declining (2 million)

The best estimates suggest that between 1451 and 1870, when the transatlantic slave trade ended, over 9 million people were transported from Africa to be enslaved in the New World (McCaa 1997).

On the African coast, West Central Africa was an even more important source of people for the New World slave markets than recent literature credits. For every region outside Angola there was a pattern of a marked rise in slave departures, occurring in sequence, followed by a plateau of departures that continued until a rather sudden end to the traffic. For Angola, however, the pattern was different: a swing away from the export of Africans was followed by a return to it.

In the Americas, sugar was the driving force in the slave trade, though gold and silver were important in the earliest phase of the traffic. Coffee later assumed the role of sugar in the final phase. American cotton would not develop as an export until after the United States abolished the slave trade.

Perhaps the most important conclusion of recent history on the subject of transatlantic links is that “the picture of African coerced migrants arriving mainly in a mix of peoples—often on the same vessel—needs revising.” Like the free migrant and indentured servant trades, systematic geographic patterns existed. Eltis suggests, “Scholars should now turn to exploring what these mean both for Africa and for African influences in the shaping of the New World…” (Eltis et al. 2001).

Women in warfare and the military in the early modern era

Active warfare throughout history has mainly been a matter for men, but women have also played a role, often a leading one. While it was common for women rulers to direct warfare, women who participated in active warfare themselves were rare. The following list of prominent women in war and their exploits, from about 1500 AD up to about 1750 AD, suggests the wider involvement of numerous unnamed women, some of them thrust into positions of leadership by accident of birth or family connection, others raised from humble origins by force of personality and circumstance.

Only women active in direct warfare, such as warriors, spies, and women who actively led armies are included in this list.

The history of bloodletting

With a history spanning at least 3000 years, bloodletting has only recently—in the late 19th century—been discredited as a treatment for most ailments.


The practice of bloodletting began around 3000 years ago with the Egyptians, then continued with the Greeks and Romans, the Arabs and Asians, then spread through Europe during the Middle Ages and the Renaissance. It reached its peak in Europe in the 19th century but subsequently declined and today in Western medicine is used only for a few select conditions.

Humors, Hippocrates, and Galen
To appreciate the rationale for bloodletting one must first understand the paradigm of disease 2300 years ago in the time of Hippocrates (460–370 BC). He believed that existence was represented by the four basic elements—earth, air, fire, and water—which in humans were related to the four basic humors: blood, phlegm, black bile, and yellow bile.

Each humor was centred in a particular organ—brain, lung, spleen, and gall bladder—and related to a particular personality type—sanguine, phlegmatic, melancholic, and choleric.[1]

Being ill meant having an imbalance of the four humors. Therefore treatment consisted of removing an amount of the excessive humor by various means such as bloodletting, purging, catharsis, diuresis, and so on. By the 1st century bloodletting was already a common treatment, but when Galen of Pergamum (129–200 AD) declared blood as the most dominant humor, the practice of venesection gained even greater importance.[2]

Galen was able to propagate his ideas through the force of his personality and the power of the pen; his total written output exceeded two million words. He had an extraordinary effect on medical practice and his teaching persisted for many centuries. His ideas and writings were disseminated by several physicians in the Middle Ages, when bloodletting became accepted as the standard treatment for many conditions.

Methods of bloodletting
Bloodletting was divided into a generalized method done by venesection and arteriotomy, and a localized method done by scarification with cupping and leeches. Venesection was the most common procedure and usually involved the median cubital vein at the elbow, but many different veins could be used. The main instruments for this technique were called lancets and fleams.[3]

Thumb lancets were small sharp-pointed, two-edged instruments often with an ivory or tortoise shell case that the physician could carry in his pocket. Fleams were usually devices with multiple, variably sized blades that folded into a case like a pocketknife.

Localized bloodletting often involved scarification, which meant scraping the skin with a cube-shaped brass box containing multiple small knives, followed by cupping, which involved placing a dome-shaped glass over the skin and extracting the air by suction or prior heating.[4]

Leeches used for bloodletting were usually the medicinal leech, Hirudo medicinalis. At each feeding a leech can ingest about 5 to 10 ml of blood, almost 10 times its own weight. The use of leeches was greatly influenced by Dr François Broussais (1772–1838), a Parisian physician who claimed that all fevers were due to specific organ inflammation. He was a great proponent of leech therapy along with aggressive bloodletting. He believed in placing leeches over the organ of the body that was deemed to be inflamed.[5]

His therapy was very popular in Europe in the 1830s, especially in France, where 5 to 6 million leeches per year were used in Paris alone and about 35 million in the country as a whole. By the late 1800s, however, enthusiasm for leech therapy had waned, but leeches are still used today in select situations.

Famous bleedings
When Charles II (1630–1685) suffered a seizure he was immediately treated with 16 ounces of bloodletting from the left arm followed by another 8 ounces from cupping.[6] Then he endured a vigorous regimen of emetics, enemas, purgatives, and mustard plasters followed by more bleeding from the jugular veins. He had more seizures and received further treatment with herbs and quinine. In total he had about 24 ounces of blood taken before he died.

After riding in snowy weather, George Washington (1732–1799) developed a fever and respiratory distress. Under the care of his three physicians he had copious amounts of blood drawn, blisterings, emetics, and laxatives. He died the next night of what has been diagnosed retrospectively as epiglottitis and shock.[6] His medical treatment aroused significant controversy, particularly the bloodletting.

Warring physicians
The practice of bloodletting aroused deep emotions in both practitioners and detractors, with intense argument about the benefit and harm of venesection. Drs Benjamin Rush, William Alison, and Hughes Bennett exemplify this conflict.

Dr Benjamin Rush (1745–1813) was one of the most controversial physicians of his time. He was arrogant and paternalistic but dedicated to eradicating illness wherever he saw it. He worked tirelessly during the yellow fever epidemics in Philadelphia in 1793 and 1797 and devoted much time to the problem of mental illness.[7]

Unfortunately he had a very simplistic view of disease and thought that all febrile illnesses were due to an “irregular convulsive action of the blood vessels.” Therefore in his mind all therapy was directed at dampening down this vascular overexcitement. He was a great proponent of “depletion therapy,” which meant aggressive bloodletting and vigorous purging.

He was known to remove extraordinary amounts of blood and often bled patients several times. “It frequently strangles a fever… imparts strength to the body… renders the pulse more frequent when it is preternaturally slow… renders the bowels, when costive, more easily moved by purging physic… removes or lessens pain in every part of the body, and more especially the head… removes or lessens the burning heat of the skin, and the burning heat of the stomach…”[8]

In addition he held a firm belief in his calomel purgatives, which were loaded with mercury and which he called “the Samson of medicine.” In numerous articles he boldly proclaimed the benefits of his therapy.

He aroused both extremely positive and negative reactions in those around him, including many physicians. Some doctors referred to his practices as “murderous” and his prescribed doses as “fit for a horse.” He had a long-running feud with his college of physicians, which forced him to resign, and his application to the faculty of Columbia Medical School in New York was denied. However, Rush Medical College in Chicago was named in his honor and gained its charter in 1837.

At the Edinburgh School of Medicine Dr William Alison (1790–1859) and Dr Hughes Bennett (1812–1875) were a study in contrasts. The former was a dignified old-timer and strong believer in bloodletting, while the latter was an arrogant newcomer and resolute debunker of bloodletting. Whereas Dr Alison followed the old tradition of clinical experience and empirical observation, Dr Bennett believed in the new methods of pathology and physiology supported by the microscope and the stethoscope.[9]

Central to their debate was the observation that the improved outcome of patients with pneumonia paralleled the decreased usage of bloodletting. While Dr Alison ascribed this to a “change in type” of illness which had gone from sthenic (strong) to asthenic (weak), Dr Bennett believed it was due to the diminished use of a dangerous therapy.

Both were implacable in their point of view, thereby underlining the significant gap between their beliefs in empirical observation versus scientific verification. Dr Bennett had the advantage of the latest techniques and “grounded his rejection of bloodletting on pathologic concepts of inflammation and pneumonia derived from microscopic studies of inflamed tissues.”[9]

The tide turns
In Paris Dr Pierre Louis (1787–1872) was another scientific-minded physician who wanted to assess the efficacy of bloodletting. He examined the clinical course and outcomes of 77 patients with acute pneumonia drawn from his own records and from hospital records.

He compared the results in patients treated with bloodletting in the early phase versus the late phase of the illness. In his conclusions he did not condemn bloodletting but concluded that the effect of this procedure “was actually much less than has been commonly believed.”[10]

Subsequent studies by Pasteur, Koch, Virchow, and others confirmed the validity of the new scientific methods, and the use of bloodletting gradually diminished to a few select conditions.

Bloodletting today
Today phlebotomy therapy is primarily used in Western medicine for a few conditions such as hemochromatosis, polycythemia vera, and porphyria cutanea tarda.[11]

Hemochromatosis is a genetic disorder of iron metabolism leading to abnormal iron accumulation in liver, pancreas, heart, pituitary, joints, and skin. It is treated with periodic phlebotomy to maintain ferritin levels at a reasonable level so as to minimize further iron deposition.

Polycythemia vera is a stem cell bone marrow disorder leading to overproduction of red blood cells and variable overproduction of white blood cells and platelets. Its treatment includes phlebotomy to reduce the red blood cell mass and decrease the chance of dangerous clots.

Porphyria cutanea tarda is a group of disorders of heme metabolism with an associated abnormality in iron metabolism. Phlebotomy is also used to decrease iron levels and prevent accumulation in various organs.

In the last 25 years leech therapy has made a comeback in the area of microsurgery and reimplantation surgery. Hirudo medicinalis can secrete several biologically active substances including hyaluronidase, fibrinase, proteinase inhibitors, and hirudin, an anticoagulant.

The leech can help reduce venous congestion and prevent tissue necrosis. In this way it can be used in the postoperative care of skin grafts and reimplanted fingers, ears, and toes. Because of concern regarding secondary infections a “mechanical leech” has been developed at the University of Wisconsin.[12]

Why did it persist?
We may wonder why the practice of bloodletting persisted for so long, especially when discoveries by Vesalius and Harvey in the 16th and 17th centuries exposed the significant errors of Galenic anatomy and physiology. However as Kerridge and Lowe have stated, “that bloodletting survived for so long is not an intellectual anomaly—it resulted from the dynamic interaction of social, economic, and intellectual pressures, a process that continues to determine medical practice.”[9]

With our present understanding of pathophysiology we might be tempted to laugh at such methods of therapy. But what will physicians think of our current medical practice 100 years from now? They may be astonished at our overuse of antibiotics, our tendency to polypharmacy, and the bluntness of treatments like radiation and chemotherapy.

In the future we can anticipate that with further advances in medical knowledge our diagnoses will become more refined and our treatments less invasive. We can hope that medical research will proceed unhampered by commercial pressures and unfettered by political ideology. If we hold truly to that belief, we can move closer to the pure goal of scientific truth.

The History of Plague – Part 1. The Three Great Pandemics

December 17th, 2012. By John Frith. Published in History, Issue Volume 20 No. 2.

Plague is an acute infectious disease caused by the bacillus Yersinia pestis and is still endemic in indigenous rodent populations of South and North America, Africa and Central Asia. In epidemics plague is transmitted to humans by the bite of the Oriental or Indian rat flea and the human flea. The primary hosts of the fleas are the black urban rat and the brown sewer rat. Plague is also transmissible person to person when in its pneumonic form. Yersinia pestis is a very pathogenic organism to both humans and animals and before antibiotics had a very high mortality rate. Bubonic plague also has military significance and is listed by the Centers for Disease Control and Prevention as a Category A bioterrorism agent.1

There have been three great world pandemics of plague recorded, in 541, 1347, and 1894 CE, each time causing devastating mortality of people and animals across nations and continents. On more than one occasion plague irrevocably changed the social and economic fabric of society.

In most human plague epidemics, infection initially took the form of large purulent abscesses of the lymph nodes called buboes (L. bubo = ‘groin’, Gr. boubon = ‘swelling in the groin’); this was bubonic plague. When bacteraemia followed, it caused haemorrhaging and necrosis of the skin rapidly followed by septicaemic shock and death: septicaemic plague. If the disease spread to the lung through the blood, it caused an invariably fatal pneumonia, pneumonic plague, and in that form plague was directly transmissible from person to person.

The three great plague pandemics had different geographic origins and paths of spread. The Justinian Plague of 541 started in central Africa and spread to Egypt and the Mediterranean. The Black Death of 1347 originated in Asia and spread to the Crimea then Europe and Russia. The third pandemic, that of 1894, originated in Yunnan, China, and spread to Hong Kong and India, then to the rest of the world.2

The causative organism, Yersinia pestis, was not discovered until the 1894 pandemic, when it was identified in Hong Kong by a French Pastorien bacteriologist, Alexandre Yersin. Four years later in 1898 his successor, Paul-Louis Simond, a fellow Pastorien, discovered that the rat was the primary host of the disease and the rat flea its vector. The brown sewer rat originated in Asia and migrated westward on the sea routes from China and India. The brown rat flourished in Europe, where there were open sewers and ample breeding grounds and food, and in the 18th and 19th centuries it replaced the black rat as the main disease host.4, 9

The primary vectors for transmission of the disease from rats to humans were the Oriental or Indian rat flea, Xenopsylla cheopis, and the Northern or European rat flea, Nosopsyllus fasciatus. The human flea, Pulex irritans, and the dog and cat fleas, Ctenocephalides canis and felis, were secondary vectors. In the pandemics, the infected fleas were able to spread the plague over long distances as they were carried by rats and by humans travelling along trade routes at sea and overland, and also by infesting rice and wheat grain, clothing, and trade merchandise. When a flea is infected, its proventriculus becomes blocked by a mass of bacteria. The flea continues to feed, biting with increasing frequency and agitation, and in an attempt to relieve the obstruction it regurgitates the accumulated blood together with a mass of Yersinia pestis bacilli directly into the bloodstream of the host. The fleas multiply prolifically on their host, and when the host dies they leave immediately, infesting new hosts and thus creating the foundations for an epidemic.10, 11

The Justinian Plague of 541-544

The first great pandemic of bubonic plague where people were recorded as suffering from the characteristic buboes and septicaemia was the Justinian Plague of 541 CE, named after Justinian I, the Roman emperor of the Byzantine Empire at the time. The epidemic originated in Ethiopia in Africa and spread to Pelusium in Egypt in 540. It then spread west to Alexandria and east to Gaza, Jerusalem and Antioch, then was carried on ships on the sea trading routes to both sides of the Mediterranean, arriving in Constantinople (now Istanbul) in the autumn of 541.12, 13

The Byzantine court historian, Procopius of Caesarea, in his work History of the Wars, described people with fever, delirium and buboes. He wrote that the epidemic was one ‘by which the whole human race came near to be annihilated’. Procopius wrote of the symptoms of the disease:

“ … with the majority it came about that they were seized by the disease without becoming aware of what was coming either through a waking vision or a dream. … They had a sudden fever, some when just roused from sleep, others while walking about, and others while otherwise engaged, without any regard to what they were doing. And the body showed no change from its previous colour, nor was it hot as might be expected when attacked by a fever, nor indeed did any inflammation set in, but the fever was of such a languid sort from its commencement and up till evening that neither to the sick themselves nor to a physician who touched them would it afford any suspicion of danger. It was natural, therefore, that not one of those who had contracted the disease expected to die from it. But on the same day in some cases, in others on the following day, and in the rest not many days later, a bubonic swelling developed and this took place not only in the particular part of the body which is called boubon, that is, “below the abdomen,” but also inside the armpit, and in some cases also beside the ears, and at different points on the thighs.”14

The focus of the Justinian pandemic was Constantinople, where the plague reached a peak in the spring of 542 with 5,000 deaths per day in the city, though some estimates run as high as 10,000 per day, and it went on to kill over a third of the city’s population. Victims were too numerous to be buried and were stacked high in the city’s churches and city wall towers, Christian doctrine preventing their disposal by cremation. Over the next three years plague raged through Italy, southern France, the Rhine valley and Iberia. The disease spread as far north as Denmark and west to Ireland, then further to Africa, the Middle East and Asia Minor. Between the years 542 and 546 epidemics in Asia, Africa and Europe killed nearly 100 million people.15, 16

The pandemic had a drastic effect and permanently changed the social fabric of the Western world. It contributed to the demise of Justinian’s reign. Food production was severely disrupted and an eight-year famine followed. The agrarian system of the empire was restructured to eventually become the three-field feudal system. The social and economic disruption caused by the pandemic marked the end of Roman rule and led to the birth of culturally distinctive societal groups that later formed the nations of medieval Europe.12

Further major outbreaks occurred throughout Europe and the Middle East over the next 200 years – in Constantinople in the years 573, 600, 698 and 747, in Iraq, Egypt and Syria in the years 669, 683, 698, 713, 732 and 750 and Mesopotamia in 686 and 704. In 664 plague laid waste to Ireland, and in England it came to be known as the Plague of Cadwaladyr’s Time, after a Welsh king who contracted plague but survived it in 682. The plague continued in intermittent cycles in Europe into the mid-8th century and did not re-emerge as a major epidemic until the 14th century.

The ‘Black Death’ of Europe in 1347 to 1352

The Black Death of 1347 was the first major European outbreak of the second great plague pandemic, which occurred over the 14th to 18th centuries. In 1346 it was known in the European seaports that a plague epidemic was present in the East. In 1347 the plague was brought to the Crimea from Asia Minor by the Tartar armies of Khan Janibeg, who had laid siege to the town of Kaffa (now Feodosya in Ukraine), a Genoese trading town on the shores of the Black Sea. The siege was unsuccessful and, according to a description by Gabriel de Mussis of Piacenza, before the Tartars left they took revenge by catapulting over the walls of Kaffa the corpses of people who had died from the Black Death. In panic the Genoese traders fled in galleys with ‘sickness clinging to their bones’ to Constantinople and across the Mediterranean to Messina, Sicily, where the great pandemic of Europe started. By 1348 it had reached Marseille, Paris and Germany, then Spain, England and Norway in 1349, and eastern Europe in 1350. The Tartars carried the plague away with them from Kaffa, spreading it further to Russia and India.17

A description of the symptoms of the plague was given by Giovanni Boccaccio in 1348 in his book Decameron, a set of tales of a group of Florentines who secluded themselves in the country to escape the plague:

“… in men and women alike it first betrayed itself by the emergence of certain tumours in the groin or armpits, some of which grew as large as a common apple, others as an egg, some more, some less, which the common folk called gavocciolo. From the two said parts of the body this deadly gavocciolo soon began to propagate and spread itself in all directions indifferently, after which the form of the malady began to change, black spots or livid making their appearance in many cases on the arm or the thigh or elsewhere.”17

The term “Black Death” was not used until much later in history; in 1347 the epidemic was known simply as “the pestilence” or “pestilentia”, and there are various explanations of the origin of the later term. Butler11 states that the term refers to the haemorrhagic purpura and ischaemic gangrene of the limbs that sometimes ensued from the septicaemia. Ziegler17 states it derives from the translation of the Latin pestis atra or atra mors, ‘atra’ meaning ‘terrible’ or ‘dreadful’, the connotation of which was ‘black’, and ‘mors’ meaning ‘death’, and so ‘atra mors’ was translated as meaning ‘black death’.

The social impacts of the Black Death in Europe during the 14th century

The overall mortality rate varied from city to city, but in places such as Florence, as observed by Boccaccio, up to half the population died, the Italians calling the epidemic the mortalega grande, ‘the great mortality’.18 People died with such rapidity that proper burial or cremation could not occur; corpses were thrown into large pits, and putrefying bodies lay in their homes and in the streets. People were as much afraid they would suffer a spiritual death as a physical one, since there were no clergy to perform burial rites:

“Shrift there was none; churches and chapels were open, but neither priest nor penitents entered – all went to the charnelhouse. The sexton and the physician were cast into the same deep and wide grave; the testator and his heirs and executors were hurled from the same cart into the same hole together.”18

Transmission of the disease was thought to be by miasmas, disease carrying vapours emanating from corpses and putrescent matter or from the breath of an infected or sick person. Others thought the Black Death was punishment from God for their sins and immoral behaviour, or was due to astrological and natural phenomena such as earthquakes, comets, and conjunctions of the planets. People turned to patron saints such as St Roch and St Sebastian or to the Virgin Mary, or joined processions of flagellants whipping themselves with nail embedded scourges and incanting hymns and prayers as they passed from town to town.17, 19, 20

“When the flagellants – they were also called cross brethren and cross bearers – entered a town, a borough or a village in a procession their entry was accompanied by the pealing of bells, singing, and a huge crowd of people. As they always marched two abreast, the procession of the numerous penitents reached farther than the eye could see.”20

The only remedies were inhalation of aromatic vapours from flowers and herbs such as rose, theriaca, aloe, thyme and camphor. Soon there was a shortage of doctors which led to a proliferation of quacks selling useless cures and amulets and other adornments that claimed to offer magical protection.

In this second pandemic, plague again caused great social and economic upheaval. Often whole families were wiped out and villages abandoned. Crops could not be harvested, travel and trade were curtailed, and food and manufactured goods became scarce. As there was a shortage of labour, surviving village labourers, the ‘villeins’, extorted exorbitant wages from the remaining aristocratic landowners. The villeins prospered and acquired land and property. The plague broke down the normal divisions between the upper and lower classes and led to the emergence of a new middle class.17, 9 The plague also led to a preoccupation with death, evident from macabre artworks such as the ‘Triumph of Death’ by Pieter Breughel the Elder in 1562, which depicted in a panoramic landscape armies of skeletons killing people of all social orders, from peasants to kings and cardinals, in a variety of macabre and cruel ways.

In the period 1347 to 1350 the Black Death killed a quarter of the population of Europe, over 25 million people, and another 25 million in Asia and Africa.15 Mortality was even higher in cities such as Florence, Venice and Paris, where more than half the population succumbed to the plague. A second major epidemic occurred in 1361, the pestis secunda, in which 10 to 20% of Europe’s population died.13 Other virulent infectious disease epidemics with high mortality occurred during this time, such as smallpox, infantile diarrhoea and dysentery. By 1430 Europe’s population was lower than it had been in 1290 and would not recover to its pre-pandemic level until the 16th century.13, 21

In 1374, when another epidemic of the Black Death re-emerged in Europe, Venice instituted various public health controls such as isolating victims from healthy people and preventing ships with disease from landing at port. In 1377 the republic of Ragusa on the Adriatic Sea (now Dubrovnik in Croatia) established a ships’ landing station far from the city and harbour in which travellers suspected to have the plague had to spend thirty days, the trentena, to see whether they became ill and died or whether they remained healthy and could leave. The trentena was found to be too short, and in 1403 in Venice travellers from the Levant in the eastern Mediterranean were isolated in a hospital for forty days, the quarantena or quaranta giorni, from which we derive the term quarantine.8, 18 The change to forty days may also have been related to other biblical and historical references, such as the Christian observance of Lent, the period for which Christ fasted in the desert, or the ancient Greek doctrine of “critical days”, which held that contagious disease will develop within 40 days after exposure.22 In the centuries that followed, most countries in Europe established quarantine, and in the 18th century the Habsburg empire established a cordon sanitaire, a line between infected and clean parts of the continent which ran from the Danube to the Balkans. It was manned by local peasants, with checkpoints and quarantine stations to prevent infected people from crossing from eastern to western Europe.8

The leather costume of the plague doctors at Nijmegen

In the 15th and 16th centuries doctors wore a peculiar costume to protect themselves from the plague when they attended infected patients. It was first illustrated in a drawing by Paulus Furst in 1656, and Jean-Jacques Manget later described a similar costume worn by the plague doctors at Nijmegen, an old Dutch town in Gelderland, in his 1721 work Treatise on the Plague. They wore protective garb from head to foot: leather or oil-cloth robes, leggings, gloves and hood, a wide-brimmed hat which denoted their medical profession, and a beak-like mask with glass eyes and two breathing nostrils, which was filled with aromatic herbs and flowers to ward off the miasmas. They avoided contact with patients by taking the pulse with a stick or by issuing prescriptions for aromatic herb inhalations through the door, and buboes were lanced with knives several feet long.19

The Great Plague of London of 1665 to 1666

Plague continued to occur in small epidemics throughout the world, but a major outbreak of pneumonic plague occurred in Europe and England in 1665 to 1666. The epidemic was described by Samuel Pepys in his diaries in 1665 and by Daniel Defoe in 1722 in his A Journal of the Plague Year. People were incarcerated in their homes, their doors painted with a cross. The epidemic reached a peak in September 1665, when 7,000 people per week were dying in London alone. Between 1665 and 1666 a fifth of London’s population died, some 100,000 people.17 The Great Fire of London in 1666 and the subsequent rebuilding of timber and thatch houses with brick and tile disturbed the rats’ normal habitat and reduced their numbers, which may have been a contributing factor to the end of the epidemic.9

An old English nursery rhyme published in Kate Greenaway’s book Mother Goose in 1881 reminds us of the symptoms of the plague:

‘Ring, a-ring, o’rosies, (a red blistery rash)

A pocket full of posies (fragrant herbs and flowers to ward off the ‘miasmas’)

Atishoo, atishoo (the sneeze and the cough heralding pneumonia)

We all fall down.’ (all dead)

Plague waxed and waned in Europe until the late 18th century, but not with the virulence and mortality of the 14th century European Black Death.

The Third Pandemic of 1894

The plague re-emerged from its wild rodent reservoir in the remote Chinese province of Yunnan in 1855. From there the disease advanced along the tin and opium routes and reached the provincial capital of K’unming in 1866, the Gulf of Tonkin in 1867, and the Kwangtung province port of Pakhoi (now Pei-hai) in 1882. In 1894 it had reached Canton and then spread to Hong Kong. It had spread to Bombay by 1896 and by 1900 had reached ports on every continent, carried by infected rats travelling the international trade routes on the new steamships.3,23 It was in Hong Kong in 1894 that Alexandre Yersin discovered the bacillus now known as Yersinia pestis, and in Karachi in 1898 that Paul-Louis Simond discovered the brown rat was the primary host and the rat flea the vector of the disease.3, 4, 24, 25

In 1900 the plague came to Australia, where the first major outbreak occurred in Sydney, its epicentre at the Darling Harbour wharves, spreading to the city, Surry Hills, Glebe, Leichhardt, Redfern, and The Rocks, and causing 100 deaths. John Ashburton-Thompson, the chief medical officer, recorded the epidemic and confirmed that rats were the source and their fleas the vectors in the epidemic. There were 12 major outbreaks of plague in Australia from 1900 to 1925, with 1371 cases and 535 deaths, most of the cases occurring in Sydney.26

The third pandemic waxed and waned throughout the world for the next five decades and did not end until 1959; in that time plague had caused over 15 million deaths, the majority of which were in India. There have been outbreaks of plague since, such as in China and Tanzania in 1983, Zaire in 1992, and India, Mozambique and Zimbabwe in 1994.15, 27 In Madagascar in the mid-1990s a multi-drug resistant strain of the bacillus was identified.15, 28 Currently around 2,000 cases occur annually, mostly in Africa, Asia and South America, with a global case fatality rate of 5% to 15%.28

Bubonic plague is a virulent disease with a significant mortality rate, transmitted primarily by the bite of the rat flea or from person to person when in its pneumonic form. There have been innumerable epidemics of plague throughout history, but it was the pandemics of the 6th, 14th and 20th centuries that have had the most impact on human society, not only in terms of the great mortality but also in the social, economic and cultural consequences that resulted. The course of development of communities and nations was altered several times. Much has changed to prevent the recurrence of pandemic plague, such as the development of the germ theory and the science of bacteriology, public health measures such as quarantine, and antibiotics such as streptomycin, but plague today remains an important and potentially serious threat to the health of people and animals.

3. Plague and the decline of Italy: hypotheses and research agenda

In seventeenth-century Italy, plague caused a demographic catastrophe from which it took many decades to recover. The long-lasting decline in population had demographic, and more specifically epidemic, causes and was neither a consequence of economic difficulties nor of the malgoverno (bad government) of foreign dominators. Statements such as Helleiner’s, “[Even without the plagues] the secular stagnation of the Italian economy in the period under review would probably have militated against demographic expansion,” betray the conviction that demographic decline was a consequence of economic decline (Helleiner 1967, p. 50).

The new data discussed here, combined with the most recent reconstruction of demographic trends in the century preceding the epidemics (Alfani 2013), suggest that this statement should be reconsidered. Plague was the main cause of demographic decline in seventeenth-century Italy. More generally, comparing the demographic trends of different areas of Western Europe (table 6) with plague incidence (table 2) reveals a strong inverse relation. Mortality, and not only economic or commercial growth, is a key factor explaining the changing demographic weight of different parts of the continent.
