Cora, a 41-year-old lawyer and mother of two, has recently been experiencing severe headaches, a high fever, and a stiff neck. Her husband, who has accompanied Cora to see a doctor, reports that Cora also seems confused at times and unusually drowsy. Based on these symptoms, the doctor suspects that Cora may have meningitis, a potentially life-threatening infection of the tissue that surrounds the brain and spinal cord.
Meningitis has several potential causes. It can be brought on by bacteria, fungi, viruses, or even a reaction to medication or exposure to heavy metals. Although people with viral meningitis usually recover on their own, bacterial and fungal meningitis are quite serious and require treatment.
Cora’s doctor orders a lumbar puncture (spinal tap) to take three samples of cerebrospinal fluid (CSF) from around the spinal cord ([link]). The samples will be sent to laboratories in three different departments for testing: clinical chemistry, microbiology, and hematology. The samples will first be visually examined to determine whether the CSF is abnormally colored or cloudy; then the CSF will be examined under a microscope to see if it contains a normal number of red and white blood cells and to check for any abnormal cell types. In the microbiology lab, the specimen will be centrifuged to concentrate any cells in a sediment; this sediment will be smeared on a slide and stained with a Gram stain. Gram staining is a procedure used to differentiate between two different types of bacteria (gram-positive and gram-negative).
About 80% of patients with bacterial meningitis will show bacteria in their CSF with a Gram stain.1 Cora’s Gram stain did not show any bacteria, but her doctor decides to prescribe her antibiotics just in case. Part of the CSF sample will be cultured—put in special dishes to see if bacteria or fungi will grow. It takes some time for most microorganisms to reproduce in sufficient quantities to be detected and analyzed.
Most people today, even those who know very little about microbiology, are familiar with the concept of microbes, or “germs,” and their role in human health. Schoolchildren learn about bacteria, viruses, and other microorganisms, and many even view specimens under a microscope. But a few hundred years ago, before the invention of the microscope, the existence of many types of microbes was impossible to prove. By definition, microorganisms, or microbes, are very small organisms; many types of microbes are too small to see without a microscope, although some parasites and fungi are visible to the naked eye.
Humans have been living with—and using—microorganisms for much longer than they have been able to see them. Historical evidence suggests that humans have had some notion of microbial life since prehistoric times and have used that knowledge to develop foods as well as prevent and treat disease. In this section, we will explore some of the historical applications of microbiology as well as the early beginnings of microbiology as a science.
People across the world have enjoyed fermented foods and beverages like beer, wine, bread, yogurt, cheese, and pickled vegetables for all of recorded history. Discoveries from several archaeological sites suggest that even prehistoric people took advantage of fermentation to preserve and enhance the taste of food. Archaeologists studying pottery jars from a Neolithic village in China found that people were making a fermented beverage from rice, honey, and fruit as early as 7000 BC.2
Production of these foods and beverages requires microbial fermentation, a process that uses bacteria, mold, or yeast to convert sugars (carbohydrates) to alcohol, gases, and organic acids ([link]). While it is likely that people first learned about fermentation by accident—perhaps by drinking old milk that had curdled or old grape juice that had fermented—they later learned to harness the power of fermentation to make products like bread, cheese, and wine.
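Although early peoples harnessed fermentation long before they understood its chemistry, the process described above can be summarized for one common case, alcoholic fermentation by yeast, in a single balanced equation:

```latex
% Alcoholic fermentation: glucose is converted to ethanol and carbon dioxide
% (carried out by yeast under anaerobic conditions)
\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH \;+\; 2\,CO_2}
```

Other fermentations yield different end products; lactic acid bacteria, for example, convert sugars to lactic acid, the organic acid that sours milk into yogurt.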
Prehistoric humans had a very limited understanding of the causes of disease, and various cultures developed different beliefs and explanations. While many believed that illness was punishment for angering the gods or was simply the result of fate, archaeological evidence suggests that prehistoric people attempted to treat illnesses and infections. One example of this is Ötzi the Iceman, a 5300-year-old mummy found frozen in the ice of the Ötztal Alps on the Austrian-Italian border in 1991. Because Ötzi was so well preserved by the ice, researchers discovered that he was infected with the eggs of the parasite Trichuris trichiura, which may have caused him abdominal pain and anemia. Researchers also found evidence of Borrelia burgdorferi, a bacterium that causes Lyme disease.3 Some researchers think Ötzi may have been trying to treat his infections with the woody fruit of the Piptoporus betulinus fungus, which was discovered tied to his belongings.4 This fungus has both laxative and antibiotic properties. Ötzi was also covered in tattoos that were made by cutting incisions into his skin, filling them with herbs, and then burning the herbs.5 There is speculation that this may have been another attempt to treat his health ailments.
Several ancient civilizations appear to have had some understanding that disease could be transmitted by things they could not see. This is especially evident in historical attempts to contain the spread of disease. For example, the Bible refers to the practice of quarantining people with leprosy and other diseases, suggesting that people understood that diseases could be communicable. Ironically, while leprosy is communicable, it is also a disease that progresses slowly. This means that people were likely quarantined after they had already spread the disease to others.
The ancient Greeks attributed disease to bad air, which they called “miasmatic odors” (the word malaria derives from the Italian mal’aria, “bad air”). They developed hygiene practices that built on this idea. The Romans also believed in the miasma hypothesis and created a complex sanitation infrastructure to deal with sewage. In Rome, they built aqueducts, which brought fresh water into the city, and a giant sewer, the Cloaca Maxima, which carried waste away and into the river Tiber ([link]). Some researchers believe that this infrastructure helped protect the Romans from epidemics of waterborne illnesses.
Even before the invention of the microscope, some doctors, philosophers, and scientists made great strides in understanding the invisible forces—what we now know as microbes—that can cause infection, disease, and death.
The Greek physician Hippocrates (460–370 BC) is considered the “father of Western medicine” ([link]). Unlike many of his predecessors and contemporaries, he dismissed the idea that disease was caused by supernatural forces. Instead, he posited that diseases had natural causes from within patients or their environments. Hippocrates and his heirs are believed to have written the Hippocratic Corpus, a collection of texts that make up some of the oldest surviving medical books.6 Hippocrates is also often credited as the author of the Hippocratic Oath, taken by new physicians to pledge their dedication to diagnosing and treating patients without causing harm.
While Hippocrates is considered the father of Western medicine, the Greek philosopher and historian Thucydides (460–395 BC) is considered the father of scientific history because he advocated for evidence-based analysis of cause-and-effect reasoning ([link]). Among his most important contributions are his observations regarding the Athenian plague that killed one-third of the population of Athens between 430 and 410 BC. Having survived the epidemic himself, Thucydides made the important observation that survivors did not get re-infected with the disease, even when taking care of actively sick people.7 This observation shows an early understanding of the concept of immunity.
Marcus Terentius Varro (116–27 BC) was a prolific Roman writer who was one of the first people to propose the concept that things we cannot see (what we now call microorganisms) can cause disease ([link]). In Res Rusticae (On Farming), published in 36 BC, he said that “precautions must also be taken in the neighborhood of swamps … because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases.”
While the ancients may have suspected the existence of invisible “minute creatures,” it wasn’t until the invention of the microscope that their existence was definitively confirmed. While it is unclear who exactly invented the microscope, a Dutch cloth merchant named Antonie van Leeuwenhoek (1632–1723) was the first to develop a lens powerful enough to view microbes. In 1675, using a simple but powerful microscope, Leeuwenhoek was able to observe single-celled organisms, which he described as “animalcules” or “wee little beasties,” swimming in a drop of rain water. From his drawings of these little organisms, we now know he was looking at bacteria and protists. (We will explore Leeuwenhoek’s contributions to microscopy further in How We See the Invisible World.)
Nearly 200 years after van Leeuwenhoek got his first glimpse of microbes, the “Golden Age of Microbiology” spawned a host of new discoveries between 1857 and 1914. Two famous microbiologists, Louis Pasteur and Robert Koch, were especially active in advancing our understanding of the unseen world of microbes ([link]). Pasteur, a French chemist, showed that individual microbial strains had unique properties and demonstrated that fermentation is caused by microorganisms. He also invented pasteurization, a process used to kill microorganisms responsible for spoilage, and developed vaccines for the treatment of diseases, including rabies, in animals and humans. Koch, a German physician, was the first to demonstrate the connection between a single, isolated microbe and a known human disease. For example, he discovered the bacteria that cause anthrax (Bacillus anthracis), cholera (Vibrio cholerae), and tuberculosis (Mycobacterium tuberculosis).9 We will discuss these famous microbiologists, and others, in later chapters.
As microbiology has developed, it has allowed the broader discipline of biology to grow and flourish in previously unimagined ways. Much of what we know about human cells comes from our understanding of microbes, and many of the tools we use today to study cells and their genetics derive from work with microbes.
Because individual microbes are generally too small to be seen with the naked eye, the science of microbiology is dependent on technology that can artificially enhance the capacity of our natural senses of perception. Early microbiologists like Pasteur and Koch had fewer tools at their disposal than are found in modern laboratories, making their discoveries and innovations that much more impressive. Later chapters of this text will explore many applications of technology in depth, but for now, here is a brief overview of some of the fundamental tools of the microbiology lab.
Which of the following foods is NOT made by fermentation?
Who is considered the “father of Western medicine”?
Who was the first to observe “animalcules” under the microscope?
Who proposed that swamps might harbor tiny, disease-causing animals too small to see?
Thucydides is known as the father of _______________.
Researchers think that Ötzi the Iceman may have been infected with _____ disease.
The process by which microbes turn grape juice into wine is called _______________.
What did Thucydides learn by observing the Athenian plague?
Why was the invention of the microscope important for microbiology?
What are some ways people use microbes?
Explain how the discovery of fermented foods likely benefited our ancestors.
What evidence would you use to support this statement: Ancient people thought that disease was transmitted by things they could not see.
This page was adapted from the textbook made available by OpenStax College under a Creative Commons License 4.0 International. Download for free at http://cnx.org/content/col11448/latest/