A case of ulcerative colitis, a form of inflammatory bowel disease. Photo via Wikimedia Commons. Credit: Samir.
A two-hit punch in the gut might explain why some people find themselves alone among their closest relatives in having inflammatory bowel disease (IBD). The double gut punches come in the form of a compromised intestinal wall coupled with a poorly behaved immune system, say Emory researchers, whose work using mice was published in the journal Immunity. IBDs include ulcerative colitis and Crohn’s disease, the latter of which is slightly more common in women.
An inflamed gut is the key feature of IBD, which affects about 600,000 people in the United States each year. Typical symptoms include bloody diarrhea, fever, and cramps, which can come and go with bouts of severe inflammation punctuating relatively calm periods. The going explanation for these disorders is a wonky immune system, but some breach of the barrier that keeps your gut contents in their place is also implicated. Researchers also have identified a link between bouts of gastroenteritis–known around my house as “throw-up” illnesses–and development of IBD. What’s remained unclear is how people who have these so-called “leaky guts” don’t develop a disease like Crohn’s when a close family member with a leaky gut does.
These hints in humans led the Emory investigators to examine the interaction of a compromised gut and the immune system in mice. The mice in the study had ‘leaky’ gut walls because they lacked a protein that usually ties cells together into water-tight sheets. Without this protein sealing up the intestinal lining, bacteria and other components can make their way deeper into the intestinal wall, triggering chronic inflammation.
The thing is, these mice with their leaky guts don’t develop colitis spontaneously, a situation, the investigators hypothesized, that reflects families full of people with leaky guts but rarely IBD. Permeable intestines alone aren’t enough. Some other dysfunction related to the immune system, they figured, must pile onto that leakiness and bring on the inflammatory disorder.
If you’re an immunologist–which I am not–an obvious choice for investigation is a class of immune cells called T cells. These cells come in a dizzying array of types, but one way to narrow them down relies on a protein that some but not all of them make. Pulling out the T cells that make this protein, says Timothy Denning, PhD, a mucosal immunologist at Emory and study author, is “the simplest way” to start examining the immune system involvement because these cells play a ton of roles in balancing different immune responses. So, they first collected the T cells carrying this protein from the mouse intestines.
“There are good and bad” versions of T cells carrying these identifier molecules, though, says Denning, so the next step was to find the “good” ones that might be protecting mice in spite of their sieve-like intestinal linings. To achieve that goal required some fancier lab moves. “We stimulated the cells and looked at the cytokines (immune signaling molecules) they make,” explains Charles Parkos, MD, PhD, an experimental pathologist and mucosal immunologist at Emory and also a paper author. “We found that the cells in the mice that were better protected predominantly secreted TGF-beta, a prototypic marker for ‘good’ cells.”
One of the things T cells do with TGF-beta is to talk to B cells, another class of immune cell. B cells take responsibility for remembering what’s attacked you in the past and marshaling forces if it attacks again. Also, when B cells are stimulated, explains Parkos, one way they respond is to release proteins–antibodies–that target the offending invaders. In the gut, the kind of antibody the B cells make in response to the TGF-beta message is immunoglobulin A, or IgA. This antibody “keeps bacteria in check,” says Denning, and also probably “broadly neutralizes lots of different microorganisms” in the intestines, adds Parkos.
The Emory-based team found that when the leaky-gut mice also had an IgA deficiency, they became more open to the types of immune cells that cause gut inflammation. The animals also were far more susceptible to colitis triggered by a chemical treatment in the lab and had much worse disease. Without the IgA, the mice couldn’t dampen inflammation triggered by bacteria slipping through the intestinal breaches. The results of this two-step physiological fail, in mice, at least: severe inflammatory gut disease.
Denning cautions that these results in mice don’t suggest a rush to TGF-beta or IgA treatment for inflammatory diseases. “TGF-beta has many effects and on many different cell types, and too much is not a good thing because it’s known to play a role in fibrosis and cancer,” says Denning. “If your child had IBD, the last thing you’d want to do is to give TGF-beta.” Much more work has to be done, he adds, for a better understanding of the implications of these results before anyone starts talking about therapies. Parkos agrees. “To our knowledge, administration of TGF-beta is not a viable therapy.”
The same applies for IgA, Denning says. “We couldn’t just take any old B cells and get them to make IgA and put it in and hope that it would do something,” he says. The reason, he explains, is because B cells make many different types of IgA molecules specific to foreign invaders they encounter, a process that happens on the spot, not in a lab dish. “We need to understand much more about the basic mechanisms, but we do believe that these pathways would be critical to induce in people who are more susceptible to IBD, such as first-degree relatives.”
Some research groups are conducting trials to treat IBDs with helminth worms–intestinal parasites–on the hypothesis that their presence would induce a balance in the immune system and tamp down an overactive inflammatory response. The balance in this case is supposed to be between two competing aspects of the immune system, called Th1 and Th2. But one issue in these intestinal inflammatory disorders, says Denning, is that Crohn’s is linked to Th1 hyperactivity while ulcerative colitis is associated with Th2.
Yet the worms appear to show some beneficial effects in both disorders, in spite of the different involvement of Th1 and Th2. The TGF-beta signaling effect on IgA that the Emory group identified operates by a third component, tentatively identified as Th3. Both Denning and Parkos are intrigued by the possibility that the presence of helminths might trigger this pathway, rather than influencing Th1 or Th2, explaining why worm treatment has sometimes proved useful for both Crohn’s and ulcerative colitis.
As for why IBD arises, the researchers hope their findings answer some questions. “There are different camps in the IBD community,” says Parkos. “Some say immune system, some say barrier, others say genetics or environment.” What they have with their results, he says, is evidence showing that a leak alone is not enough and that a wonky immune system alone is not enough. But the double-whammy of a leaky gut and an absence of immune protection “dramatically increase susceptibility to disease, and that helps explain why diseases are so complicated,” he says.
The use of parasitic worms for these inflammatory diseases arose from the concept of the hygiene hypothesis, the idea that we’re too clean in the modern developed world, leading to an immune imbalance that can include chronic inflammation and autoimmune disorders. Asked about any links between the hygiene hypothesis and this pathway to IBD they identified in mice, Denning says, “It’s not obviously all about the parasites. That’s just one key thing–it’s probably an exposure to a lot of different types of things in your gut and airways.” He describes the immune system as being a thermostat that registers a specific set-point early on based on these exposures. This set-point, he says, is lower in people who grow up in developed countries like the United States and leads to a “trigger-happy immune system that is ready to fire much more easily.”
That doesn’t mean that a worm infection or just being dirty will prevent your developing IBD. That said, these immunologists have the same general advice for parents regarding their children: being too clean is not a good thing. “We feel exactly the opposite,” Denning adds. “Go play in the dirt.”
The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.
The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more surprising is the fact that two of these categories of large molecules themselves break down into a small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.
Big Molecules with Small Building Blocks
The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.
We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.
When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.
Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.
The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.
Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.
On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.
The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!
If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.
The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?
If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.
In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.
Sugar and Fuel
A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.
Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
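To put a rough number on that combinatorial point, here is a quick worked calculation (my own illustration, not from the original text), using the 20 amino acid monomers mentioned earlier:

```latex
% Number of distinct chains of length n built from 20 amino acid monomers:
%   20^n
% Even a short peptide of just 10 amino acids allows
20^{10} = 10{,}240{,}000{,}000{,}000 \approx 1.0 \times 10^{13}
% distinct sequences -- about ten trillion possibilities from 20 building blocks.
```

Real proteins often run to hundreds of amino acids, so the number of possible sequences dwarfs even this figure, which is why so few building blocks can yield such diversity.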
Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.
Polysaccharides: Fuel and Form
Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.
Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and access it when our glucose levels drop.
Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.
The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.
Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.
The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.
That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.
These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.
Lipids: The Fatty Trifecta
Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.
Fats: the Good, the Bad, the Neutral
Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?
Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.
Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.
Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every available bond occupied by a hydrogen single-bonded to a carbon.
Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconuts and palm also are known for their saturated fat content.
The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.
You can probably now guess what an unsaturated fat is–one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with carbon, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.
In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.
A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.
Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.
Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.
Phospholipids: An Abundant Fat
You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoonful of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.
Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.
There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.
Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.
The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.
Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.
Steroids: Here to Pump You Up?
Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, in part helps us out because it removes the bad cholesterol, low-density lipoprotein or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.
But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.
Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.
Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid-based, like testosterone, or a short protein, like insulin.
As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.
Levels of Structure
Amino acids are the building blocks of proteins. A few amino acids strung together is called a peptide, while a longer chain of many amino acids is called a polypeptide. When the amino acids in such a chain interact with each other to form a properly folded molecule, we call that molecule a protein.
For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.
This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.
Once those interactions have occurred, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the secondary structures along the different parts of the amino acid chain must interact with one another, and the pH, temperature, and salt balance must be just right to facilitate the folding.
The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.
In many instances, however, a complete protein consists of two or more interacting amino acid chains. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains, each properly folded into its tertiary structure, all interacting as a single unit. In cases like this, involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, all behaving as a single protein unit.
A Plethora of Purposes
What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.
As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.
How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.
Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.
DNA vs. RNA: A Matter of Structure
DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.
So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.
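The scale of that diversity is easy to put numbers on: a four-letter alphabet can spell 4^n different sequences of length n. A back-of-the-envelope sketch in Python (the numbers are simple arithmetic, not anything measured from real genomes):

```python
# With a 4-letter alphabet (A, G, T, C), a DNA stretch that is
# n nucleotides long can spell 4**n distinct sequences.
for n in (1, 10, 100):
    print(f"length {n}: {4 ** n} possible sequences")
```

Even a short stretch of 10 nucleotides allows over a million possible sequences, which is why just four building blocks are enough to encode everything from bacteria to blue whales.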
RNA is also simple at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.
DNA vs. RNA: Function Wars
An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
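Those pairing rules are mechanical enough to sketch in a few lines of Python. This is a toy illustration of the A-T/U and C-G rules described above, not a model of the actual cellular machinery; the dictionaries and function names are my own:

```python
# Base-pairing rules: A bonds with T (DNA) or U (RNA); C bonds with G.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}  # T in the template reads as A; A reads as U

def replicate(template: str) -> str:
    """Build the complementary DNA strand from a DNA template."""
    return "".join(DNA_PAIR[base] for base in template)

def transcribe(template: str) -> str:
    """Build the RNA copy read off a DNA template strand."""
    return "".join(RNA_PAIR[base] for base in template)

template = "ATGC"
print(replicate(template))   # TACG
print(transcribe(template))  # UACG
```

The same template yields either a new DNA strand or an RNA copy depending on which pairing table is applied, which is the whole trick: one code, two uses.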
RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.
An African-American woman and scientist in Tanzania
by Danielle Lee, Ph.D.
Actual field diary entry, Tuesday, August 7, 2012, ~8:30 am
I cried this morning. In the shower. I was trying (poorly) to suppress screams of pain as I let the water run on my leg. I knew it was going to be bad when I saw blood on my pants as I pulled my field cover pants off.
I had been running into the same bush on line 3 between traps C and D every day. It had scraped me good, but this time it really hurt. I fell down and screamed in pain. Shabani* was breaking across the field to come to see about me. It really was of no use. He couldn’t help me. I just needed a minute or two to recover. I soon walked it off.
But in that moment standing in the shower, I let out a yelp. I tried to hold my scraped leg under the hot water to clean the wound. But it stung like the devil.
I cried, much like I did as a little girl. I looked down at my scarred legs and immediately recollected the summer of 1981. I was such a tomboy, playing in the yard and in the streets with my boy cousins, and clumsy, so clumsy. My legs and arms were covered in bandages like tattoos. This obviously frustrated my bio-dad to no end. I vividly remember him threatening to spank me if I got another scratch on my legs. He scolded that I was a girl and I had no business being scarred up like that. I was shook. I tried to play more girl-like and carefully, but it was of no use. I liked climbing trees and tussling too much to stop. So I resorted to hiding my scars from him.
But as I looked at my abused legs, I couldn’t help but think how ‘unlady-like’ my stems were. What boy is going to like me with my legs looking like that, or with feet badly needing a pedicure?
And it makes me wonder, Do male researchers ever have conversations like these with themselves or with each other? Unlike our male counterparts, female researchers or long-term travelers and hikers have a few extra hygiene and grooming regimes to consider.
What about that time of the month?
How do I keep a happy lady garden in the middle of the bush?
Do I or don’t I shave my legs?
First, the monthly menses
Dealing with the logistics of menses is always an inconvenience to me. My previous field research experiences taught me that tampons are the most wonderful invention, ever! They are especially handy if you have a heavier flow or will be busy for long periods of time in the field. Wear a pad for backup so as not to soil and permanently stain your good field pants. However, the last time I was in a developing nation doing field ecology, I said if I could halt my period for the time I was there, then I would certainly do it. I was able to do just that: I had a hysterectomy 3 years ago (yay for me), but I’m sorry I can’t offer more first-hand advice for managing menses to younger researchers. [Ed. note: Some birth control pill formulations promise to limit bleeding to four times a year.]
I did make note that you can get feminine products in Tanzania, primarily sanitary napkins. They come in small-quantity packages, and there are not many varieties. It’s like the options that were available in 1980. If you have a preference for certain products, e.g., dry-weave, wings, long/short, light/super-absorbent variety pack, then I recommend bringing your own.
Second, maintaining your lady-garden
Since said hysterectomy, I have become more sensitive to microflora imbalance. It didn’t occur to me to be prepared for yeast infections. I eventually asked one of the other U.S. female researchers if she had suffered from yeast infections since living in Tanzania. She told me that she had but that she was prepared. Her doctor pre-prescribed vaginal antifungals to take with her on her trip. I was not warned, but it occurred to me that if I was drinking bottled water because of the risks of local water disrupting my GI tract, then maybe my private parts would be sensitive as well. I began washing my sensitive areas with bottled water and noticed an immediate improvement. I also began taking acidophilus pills to get my system on track.
But let’s not underestimate the importance of your clothes and underwear-cleaning regime. You may discover you are allergic to the washing powder sold and used there. If that is the case, then I recommend hand-washing your unmentionables with bar soap and hanging them up to air dry in a dust-free location.
No matter what may be the root of your sensitivity (including old-fashioned stress), I highly recommend that all female researchers include vaginitis treatment ointments/creams and/or pills in their sundry first-aid and medicine kits. It is as important as packing anti-itch creams and anti-diarrheal pills in my book.
Finally, to shave or not to shave?
I have vanity issues, I admit. I (unnecessarily) obsess over grooming. Should I continue to shave my legs and underarms? Hirsute woman problems. The truth is, this grooming obsession didn’t matter. There was no need to obsessively attend to whiskers above my lip or on my chin or get a pedicure or even (welp) wear deodorant. Taking cues from local women, I noticed no one shaved their legs or under their arms – at least not as obsessively as we do in the West. And everyone had dusty feet! LOL, seriously red dust was everywhere and people wore sandals or flip flops or went barefoot for many occasions and traveled long distances, too!
A week before my departure, I ran out of my solid, invisible unscented deodorant. I thought that I would just go to the drug store and buy more. Nope! I visited 4 stores and could find only a flowery-smelling liquid roll-on anti-perspirant. It did nothing for me, but it didn’t really matter. It wasn’t uncommon to smell a ‘day’s worth of work’ on people throughout the day, but I felt uncomfortable.
These were my obsessions. However, doing these things made me feel human, like a woman even. But it was a relief to relax my obsessive preoccupation with my body for a while.
The take-home message is if there are some things that help you make it through the day or season, then bring them, e.g., tweezers, a hand mirror, a pumice stone, or a certain brand of hygiene product. Yes, most basic products are available at the local dukas or apothecaries, and prices are fair. However, the varieties are limited. There is no better way to be reminded of how first-world you are than when you ask for a single-use, fancy-pants, comfort-only product in a developing nation.
*Shabani – Shabani Lutea was my field research assistant. He works for Sokoine University of Agriculture and is experienced in trapping and handling wild African pouched rats (without gloves!). He is the most awesome field research assistant there ever was, forever branded as the Rat Whisperer, because he really is that good.
About the author
DNLee is a post-doctoral researcher at Oklahoma State University. She is currently studying African-Pouched Rats, Cricetomys gambianus, an interesting yet largely mysterious animal that uses its keen sense of smell to detect landmines. She spent summer 2012 in Morogoro, Tanzania, studying the animals in the wild and in captivity. This is DNLee’s second installment in her series for Double X Science about her field experiences in Tanzania.
But today, I’m writing about those of us who have at least two X chromosomes. You may know that usually, carrying around a complete extra chromosome can lead to developmental differences, health problems, or even fetal or infant death. How is it that women can walk around with two X chromosomes in each body cell–and the X is a huge chromosome–yet men get by just fine with only one? What are we dealing with here: a half a dose of X (for men) or a double dose of X (for women)?
The answer? Women are typically the ones engaging in what’s known as “dosage compensation.” To manage our double dose of X, each of our cells shuts down one of the two X chromosomes it carries. The result is that we express the genes on only one of our X chromosomes in a given cell. This random expression of one X chromosome in each cell makes each woman a lovely mosaic of genetic expression (although not true genetic mosaicism), varying from cell to cell in whether we use genes from X chromosome 1 or from X chromosome 2.
Because these gene forms can differ between the two X chromosomes, we are simply less uniform in what our X chromosome genes do than are men. An exception is men who are XXY, who also shut down one of those X chromosomes in each body cell; women who are XXX shut down two X chromosomes in each cell. The body is deadly serious about this dosage compensation thing and will tolerate no Xtra dissent.
If we kept the entire X chromosome active, that would be a lot of Xtra gene dosage. The X chromosome contains about 1100 genes, and in humans, about 300 diseases and disorders are linked to genes on this chromosome, including hemophilia and Duchenne muscular dystrophy. Because males get only one X chromosome, these X-linked diseases are more frequent among males–if the X chromosome they get has a gene form that confers disease, males have no backup X chromosome to make up for the deficit. Women do and far more rarely have X-linked diseases like hemophilia or X-linked differences like color blindness, although they may be subtly symptomatic depending on how frequently a “bad” version of the gene is silenced relative to the “good” version.
The most common example of the results of the random-ish gene silencing XX mammals do is the calico or tortoiseshell cat. You may have heard that if a cat’s calico, it’s female. That’s because the cat owes its splotchy coloring to having two X chromosome genes for coat color, which come in a couple of versions. One version of the gene results in brown coloring while the other produces orange. If a cat carries both forms, one on each X, wherever the cells shut down the brown X, the cat is orange. Wherever cells shut down the orange X, the cat is brown. The result? The cat can haz calico.
Cells “shut down” the X by slathering it with a kind of chemical tag that makes its gene sequences inaccessible. This version of genetic Liquid Paper means that the cellular machinery responsible for using the gene sequences can’t detect them. The inactivated chromosome even has a special name: It’s called a Barr body. The XXer who developed a hypothesis to explain how XX/XY mammals compensate for gene dosage is Mary Lyon, and the process of silencing an X by condensing it is fittingly called lyonization. Her hypothesis, based on observations of coat color in mice, became a law–the Lyon Law–in 2011.
Yet the silencing of that single chromosome in each XX cell isn’t total. As it turns out, women don’t shut down the second X chromosome entirely. The molecular Liquid Paper leaves clusters of sequences available, as many as 300 genes in some women. That means that women are walking around with full double doses of some X chromosome genes. In addition, no two women silence or express precisely the same sequences on the “silenced” X chromosome.
What’s equally fascinating is that many of the genes that go unsilenced on a Barr body are very like some genes on the Y chromosome, and the X and Y chromosomes share a common chromosomal ancestor. Thus, the availability of these genes on an otherwise silenced X chromosome may ensure that men and women have the same Y chromosome-related gene dosage, with men getting theirs from an X and a Y and women from having two X chromosomes with Y-like genes.
Not all genes expressed on the (mostly) silenced X are Y chromosome cross-dressers, however. The fact is, women are more complex than men, genomically speaking. Every individual woman may express a suite of X-related genes that differs from that of the woman next to her and that differs even more from that of the man across the room. Just one more thing to add to that sense of mystery and complexity that makes us so very, very double X-ey.
[ETA: Some phrases in this post may have appeared previously in similar form in Biology Digest, but copyright for all material belongs to EJW.]
Jeanne, would you like some…peeeaaasss? License information here.
I was seven weeks deep when it hit me. Suddenly, I was in a chronic state of queasiness. Under most circumstances, I had it under control. Sure, I would gag every time I brushed my teeth, but (mostly) I could keep it all down. Then I went to my aunt Diane’s house for dinner.
Aunt Diane rolls with a crowd of self-made Italian chefs and, as a result, most of her cooking falls under the “rustic Italian” umbrella. It is not uncommon to see sitting in her cupboard a massive inventory of jarred plum tomatoes or for an entire section of her freezer to be dedicated to homemade vodka sauce, always frozen in those takeaway containers that originally brought us egg drop soup. Under normal circumstances, I’d be psyched to eat over.
I don’t recall the entire menu, but there is one side dish that has been forever burned into memory, and not in a good way. I remember staring at my plate, specifically at the heaping pile of sautéed peas. I kept rearranging the peas on my plate, sometimes spreading them out, sometimes piling them up. Then Diane looked at me and excitedly asked, “Jeanne, did you try my peas? I made them just for you!” I don’t know what compelled her to make these peas for me. Perhaps it was because I am a vegetarian and the rest of the meal involved meat? But, there they were, staring me down, and there Diane was, watching with anticipation, waiting for my approval.
Because I adore my aunt Diane and I wanted to make her happy (after all, she did just cook an entire meal for my small family), I scooped up a moderate amount of peas with my fork and deposited them in my mouth. I had to use every fiber of my being to chew them, and even more effort to actually swallow. My body was not cooperating and I had to implement a state of near meditation to keep them from coming back up. Luckily, I kept my cool and was able to coerce my face into showing a smile while simultaneously telling my aunt that her peas were delicious.
Credit: Jeanne Garbarino.
My husband picked up on my soaring level of discomfort and without missing a beat, ate all my peas when Diane wasn’t looking. We ended the evening with my stomach contents intact, but barely.
The next morning, as I was preparing my 18-month-old daughter’s daycare lunch, I remembered that we were provided with a parting gift of sautéed peas. I took them out of the fridge and proceeded to aliquot them into containers more suitable for a toddler. As I removed the lid, the onion-tinged aroma of Diane’s sautéed spring peas smacked me across my face. My body was clearly angry about what I had done to it the night before and, as if it were in a state of protest, I found myself sprinting to the bathroom where I began to puke.
From that day forth, I could not eat peas, let alone see or smell them, without eliciting extreme nausea. It didn’t matter what time of day, the mere presence of peas, although not necessary, was sufficient to make me toss my, well, peas.
It has long been known that nausea and vomiting are common symptoms of pregnancy. In fact, documentation of this phenomenon goes as far back as 2000 BC. However, the term “morning sickness” is a complete misnomer. For one, pregnancy-related nausea and vomiting is not just a morning thing. It can happen at any time of day. Second, the term “sickness” suggests a state of unhealthiness. We know that perfectly healthy pregnant women who deliver perfectly healthy babies experience morning sickness, and this type of nausea and vomiting is not an indicator of maternal and/or fetal health.
But, that doesn’t change the fact that it sucks.
Morning sickness, more appropriately known as nausea and vomiting in pregnancy (NVP), affects approximately two-thirds of women in their first trimester of pregnancy. In many cases, morning sickness subsides at the end of the first trimester. In other cases, the symptoms of morning sickness can last for the entire pregnancy. For both my pregnancies, I experienced morning sickness for the first 5 months.
I feel so lucky.
No one really knows the exact mechanisms responsible for the onset of morning sickness. We do know that the drastic hormonal changes that occur during early pregnancy certainly play a role; however, these effects are likely indirect. For instance, estrogen levels do not differ between pregnant women with morning sickness and those who do not experience symptoms. Furthermore, there is no causal relationship between human chorionic gonadotropin (hCG), the early pregnancy hormone detected by pregnancy tests, and morning sickness, despite the fact that peak hCG levels and peak severity of pregnancy-related nausea and vomiting occur at approximately the same time.
Based on these observations, scientists suggest that the hormonal fluctuations in pregnant women can elicit different responses in an individual, rendering some extremely susceptible and others remarkably resistant to the same stimulus (with regard to nausea and vomiting). This raises the question: Is there a genetic predisposition to morning sickness?
While a “morning sickness” gene has not been identified, a few lines of evidence point toward a potential for inheriting the tendency. For instance, identical twins are fairly likely to share a tendency to morning sickness. Also, you are more likely to experience morning sickness if your mom experienced it, too. Even though genetics may be involved, the onset of morning sickness is probably what scientists call “multifactorial,” a result of a very complex interaction between genetics and environment, making it difficult to find a treatment that is effective and safe for everyone.
Until more is known, we are stuck eating saltines and sour candy. At least it’s something, right?
Food aversions and morning sickness
Make them if you dare. Credit: Jeanne Garbarino.
For my first pregnancy, it was smoked salmon, which I probably shouldn’t have been eating in the first place. For my second pregnancy, it was peas. (Interestingly, my aunt Diane initially provided both foods, which, after that initial consumption, was immediately followed by the onset of morning sickness.) The mere sight of either peas or smoked salmon elicited an uncomfortable queasiness that often culminated with a sprint to the porcelain throne. Apparently, this type of experience is pretty normal.
Developing an aversion to specific tastes and smells during pregnancy is an extremely common phenomenon. In fact, between 50–90% of pregnant women worldwide experience some level of food aversion, with the most common aversions being meat, fish, poultry, and eggs. Furthermore, research suggests that food aversions developed during pregnancy are actually novel as opposed to an exaggeration of a pre-existing dislike for a certain food.
Complementing the development of food aversions is the report that dietary changes in pregnant woman are often related to changes in olfaction, or sense of smell. More specifically, some pregnant women experience increased sensitivity to certain odors, and usually in an unpleasant way. This heightened sensitivity is thought to be protective against foods that could pose a problem for mother and baby, such as those that have become rancid.
When I was pregnant, the self-perceived powerfully pungent scent of peas could have probably knocked me over if it was translated into some other physical force. I wish I had a gas mask.
Is there some benefit to morning sickness?
In general, nausea and vomiting are a defense mechanism, acting to protect us from the accidental ingestion of toxins. While morning sickness is likely a very complicated condition that needs further study, a popular explanation suggests that morning sickness is beneficial to both mother and fetus.
Several lines of observations support this idea, formally called the “maternal and embryo protection hypothesis”: (a) peak sensitivity to morning sickness occurs at approximately the same time that embryo development is most susceptible to toxins and chemical agents; and (b) women who experience morning sickness during their pregnancy are less likely to miscarry compared to women who do not experience morning sickness.
In essence, the maternal and embryo protection hypothesis suggests that morning sickness is an adaptive process, contributing to evolutionary success (measured in terms of how many of your genes are present in later generations). However, morning sickness is not found in all societies. One possible explanation for this is that those societies that do not widely experience morning sickness are significantly more likely to have plant-based diets (meats spoil much faster than plants). Another argument against evolutionary adaptation is that morning sickness has been documented only in three other species: domestic dogs, captive rhesus macaques, and captive chimpanzees.
It makes sense that the pregnancy-related nausea and vomiting widely known as morning sickness is a means to help protect mom and baby. It makes sense that women have a mechanism to detect and/or expel toxins and potentially harmful microorganisms if ingested. But the idea that morning sickness is actually a product of evolution is still under debate.
And even as a biologist, if I ever have to go through morning sickness again, the idea that it could be protective won’t really bring me comfort as I am puking up my guts. But, biology is biology and sometimes we just have to deal with it.
Andrews, P. and Whitehead, S. Pregnancy sickness. American Physiological Society. 1990 February; 5: 5-10.
Flaxman, S.M. and Sherman, P.W. Morning sickness: a mechanism for protecting mother and embryo. The Quarterly Review of Biology. 2000 June; 75(2):
Goodwin, T.M. Nausea and vomiting of pregnancy: an obstetric syndrome. American Journal of Obstetrics and Gynecology. 2002; 185(5): 184-189.
Koch, K.L. Gastrointestinal factors in nausea and vomiting of pregnancy. American Journal of Obstetrics and Gynecology. 2002; 185(5): 198-203.
Nordin, S., Broman, D.A., Olofsson, J.K., and Wulff, M. A longitudinal descriptive study of self-reported abnormal smell and taste perception in pregnant women. Chemical Senses. 2004; 29(5): 391-402.
[Today's post first appeared at Dr. Kristina Killgrove's blog, Powered by Osteons. Kristina is a bioarchaeologist who studies the skeletons of ancient Romans to learn more about how they lived. Her biography at her blog begins, "When your life's passion is to study dead Romans, you often get asked for your 'origin story,' something that explains a long, abiding and, frankly, slightly creepy love for skeletons." Now that you undoubtedly want to know more, read the rest of her bio here, and then read below to learn why childbirth is so difficult and what the archaeological record has to tell us about outcomes for mother and child in the ancient world. For more about Kristina and her work, you can see her academic Website at Killgrove.org and find out about her latest research project at RomanDNAProject.org. You can also find her at her G+ page and on Twitter as @BoneGirlPhD.]
Basically since we started walking upright, childbirth has been difficult for women. Evolution selected for larger and larger brains in our hominin ancestors such that today our newborns have heads roughly 102% the size of the mother’s pelvic inlet width (Rosenberg 1992).
Yes, you read that right. Our babies’ heads are actually two percent larger than our skeletal anatomy.
Obviously, we’ve also evolved ways to get those babies out. Biologically, towards the end of pregnancy, a hormone is released that weakens the cartilage of the pelvic joints, allowing the bones to spread; and the fetus itself goes through a complicated movement to make its way down the pelvic canal, with its skull bones eventually sliding around and overlapping to get through the pelvis. Culturally, we have another way to deliver these large babies: the so-called caesarean section.
Up until the 20th century, childbirth was dangerous. Even today, in some less developed countries, roughly 1 maternal death occurs for every 100 live births, most of those related to obstructed labor or hemorrhage (WHO Fact Sheet 2010). If we project these figures back into the past, millions of women must have died during or just after childbirth over the last several millennia. You would think, then, that the discovery of childbirth-related burial – that is, of a woman with a fetal skeleton within her pelvis – would be common in the archaeological record. It’s not.
Archaeological Evidence of Death in Childbirth
Two recent articles in the International Journal of Osteoarchaeology start the exact same way, by explaining that “despite this general acceptance of the vulnerability of young females in the past, there are very few cases of pregnant woman (sic) reported from archaeological contexts” (Willis & Oxenham, In Press) and “archaeological evidence for such causes of death is scarce and therefore unlikely to reflect the high incidence of mortality during and after labour” (Cruz & Codinha 2010:491).
The examples of burials of pregnant women that tend to get cited include two from Britain (both published in the 1970s), four from Scandinavia (published in the 1970s and 1980s), three from North America (published in the 1980s), one from Australia (1980s), one from Israel (1990s), six from Spain (1990s and 2000s), one from Portugal (2010), and one from Vietnam (2011) (most of these are cited in Willis & Oxenham). Additionally, I found some unpublished reports: a skeleton from Egypt, a body from the Yorkshire Wolds in England, and a skeleton from England.
The images of these burials are impressive: even more than child skeletons, these tableaux trigger pathos; they’re snapshots of two lives cut short because of an evolutionary trade-off.
The wide range of dates and geographical areas illustrated in the slideshow demonstrates quite clearly that death of the mother-fetus dyad is a biological consequence of being human. But what we have from archaeological excavations is still fewer than two dozen examples of possible childbirth-related deaths from all of human history.
Where are all the mother-fetus burials?
As with any bioarchaeological question, there are a number of reasons that we may or may not find evidence of practices we know to have existed in the past. Some key issues at play in recovering evidence of death in childbirth include:
Archaeological Theory and Methodology. From the dates of discovery of maternal-fetal death cited above, it’s obvious that these examples weren’t discovered until the 1970s. Why the 70s? It could be that the rise of feminist archaeology focused new attention on the graves of females, with archaeologists realizing the possibility that they would find maternal-fetal burials. Or it could be that the methods employed got better around this time: archaeologists began to sift dirt with smaller mesh screens and float it for small particles like seeds and fetal bones.
Death at Different Times. Although some women surely perished in the middle of childbirth, along with a fetus that was obstructed, in many cases delivery likely occurred, after which the mother, fetus, or both died. The modern medical literature records direct maternal deaths (complications of pregnancy, delivery, or recovery) and indirect maternal deaths (pregnancy-related death of a woman with preexisting or newly arisen health problems) up to about 42 days postpartum. An infection related to delivery or severe postpartum hemorrhaging could easily have killed a woman in antiquity, leaving a viable newborn. Similarly, newborns can develop infections and other conditions once outside the womb, and infant mortality was high in preindustrial societies. When mother and child died at different times, a bioarchaeologist can’t say for sure that the deaths were related to childbirth. Even finding a female skeleton with a fetal skeleton inside it is not always a clear example, as there are forensic cases of coffin birth, or postmortem fetal extrusion, in which a non-viable fetus is spontaneously delivered after the death of the mother.
Cultural Practices. Another condition of being human is the ability to modify and mediate our biology through culture. So the final possibility for the lack of mother-fetus burials is a given society’s cultural practices surrounding childbirth and burial. In the case of complicated childbirth (called dystocia in the medical literature), cultural intervention takes the form of the caesarean section (or C-section), a surgical procedure that dates back at least to the origins of ancient Rome.
Cultural Interventions in Childbirth
It’s often assumed that the term caesarean/cesarean section comes from the manner of birth of Julius Caesar, but it seems that the Roman author Pliny may have just made this up. The written record of the surgical practice originated as the Lex Regia (royal law) with the second king of Rome, Numa Pompilius (c. 700 BC), and was renamed the Lex Caesarea (imperial law) during the Empire. The law is passed down through Justinian’s Digest (11.8.2) and reads:
Negat lex regia mulierem, quae praegnas mortua sit, humari, antequam partus ei excidatur: qui contra fecerit, spem animantis cum gravida peremisse videtur.
The royal law forbids burying a woman who died pregnant until her offspring has been excised from her; anyone who does otherwise is seen to have killed the hope of the offspring with the pregnant woman. [Translation mine]
Example of Roman gynaecological equipment: speculum From the House of the Surgeon, Pompeii (1st c AD) Photo credit: UVa Health Sciences Library
There’s discussion as to whether this law was instituted for religious reasons or for the more practical reason of increasing the population of tax-paying citizens. In spite of this law, though, there isn’t much historical evidence of people being born by C-section. Many articles claim the earliest attested C-section as having produced Gorgias, an orator from Sicily, in 508 BC (e.g., Boley 1991), but Gorgias wasn’t actually born until 485 BC and I couldn’t find a confirmatory source for this claim. Pliny, however, noted that Scipio Africanus, a celebrated Roman general in the Second Punic War, was born by C-section (Historia Naturalis VII.7); if this fact is correct, the earliest confirmation that the surgery could produce viable offspring dates to 236 BC.
This practice in the Roman world is not the same as our contemporary idea of C-section. That is, the mother was not expected to survive and, in fact, most of the C-sections in Roman times were likely carried out following the death of the mother. Until about the 1500s, when the French physician François Rousset broke with tradition and advocated performing C-sections on living women, the procedure was performed only as a last-ditch effort to save the neonate. Some women definitely survived C-sections from the 16th to 19th centuries, but it was still a risky procedure that could easily lead to complications like endometritis or other infection. Following advances in antibiotics around 1940, though, C-sections became more common because, most importantly, they were much more survivable.
Caesarean Sections and Roman Burials
Roman relief showing a birthing scene Tomb of a Midwife (Tomb 100), Isola Sacra Photo credit: magistrahf on Flickr
In spite of the Romans’ passion for recordkeeping, there’s very little evidence of C-sections. It’s unclear how religiously the Lex Regia/Caesarea was followed in Roman times, which means it’s unclear how often the practice of C-section occurred. Would all women have been subject to these laws? Just the elite or just citizens? How often did the section result in a viable newborn? Who performed the surgery? It probably wasn’t a physician (since men didn’t generally attend births), but a midwife wouldn’t have been trained to do it either (Turfa 1994).
Whereas we can supplement the historical record with bioarchaeological evidence to understand Romans’ knowledge of anatomy, their consumption of lead sugar, or the practice of crucifixion, this isn’t possible with C-sections – the surgery is done in soft tissue only, meaning we’d have to find a mummy to get conclusive evidence of an ancient C-section.
We can make the hypothesis, though, that because of the Lex Regia/Caesarea, we should find no evidence in the Roman world of a woman buried with a fetus still inside her. This hypothesis, though, is quickly negated by two reported cases – one from Kent in the Romano-British period and one from Jerusalem in the 4th century AD. The burial from Kent hasn’t been published, although there is a photograph in the slide show above.
Interestingly, the Jerusalem find was studied and reported by Joe Zias, who also analyzed the only known case of crucifixion to date. Zias and colleagues report on the find in Nature (1993) and in an edited volume (1995), but their primary goal was to disseminate information about the presence of cannabis in the tomb (and its supposed role in facilitating childbirth), so there’s no picture and the information about the skeletons is severely lacking:
We found the skeletal remains of a girl (sic) aged about 14 at death in an undisturbed family burial tomb in Beit Shemesh, near Jerusalem. Three bronze coins found in the tomb dating to AD 315-392 indicate that the tomb was in use during the fourth century AD. We found the skeletal remains of a full-term (40-week) fetus in the pelvic area of the girl, who was lying on her back in an extended position, apparently in the last stages of pregnancy or giving birth at the time of her death… It seems likely that the immature pelvic structure through which the full-term fetus was required to pass was the cause of death in this case, due to rupture of the cervix and eventual haemorrhage (Zias et al. 1993:215).
Both Roman-era examples involve young women, and it is quite interesting that they were already fertile. Age at menarche in the Roman world depended on health, which in turn depended on status, but it’s generally accepted that menarche happened around 14-15 years old and that fertility lagged behind until 16-17, meaning for the majority of the Roman female population, first birth would not occur until at least 17-19 years of age (Hopkins 1965, Amundsen & Diers 1969). These numbers have led demographers like Tim Parkin (1992:104-5) to note that pregnancy was likely not a major contributor to premature death among Roman women. But the female pelvis doesn’t reach skeletal maturity until the late teens or early 20s, so complications from the incompatibility in pelvis size versus fetal head size are not uncommon in teen pregnancies, even today (Gilbert et al. 2004).
More interesting than the young age at parturition is the fact that both of these young women were likely buried with their fetuses still inside them, in direct violation of the Lex Caesarea. So it remains unclear whether this law was ever enforced, or if the application of the law varied based on location (these young women were both from the provinces), social status (both young women were likely higher status), or time period. Why wasn’t medical intervention, namely C-section, attempted on these young women? It’s possible that further context clues from the cemeteries and associated settlements could give us more information about medical practices in these specific locales, but neither the Zias articles nor the Kent report make this information available.
Childbirth – Biological or Cultural?
Childbirth is both a biological and a cultural process. While biological variation is consistent across all human populations, the cultural processes that can facilitate childbirth are quite varied. The evidence that bioarchaeologists use to reconstruct childbirth in the past includes skeletons of mothers and their fetuses; historical records of births, deaths, and interventions; artifacts that facilitate delivery; and context clues from burials. The brief case study of death in childbirth in the Roman world further shows that history alone is insufficient to understand the process of childbirth, the complications inherent in it, and the form of burial that results. In order to develop a better understanding of childbirth through time, it’s imperative that archaeologists pay close attention when excavating graves, meticulously document their findings, and publish any evidence of death in childbirth.
D.W. Amundsen, & C.J. Diers (1969). The age of menarche in Classical Greece and Rome. Human Biology, 41 (1), 125-132. PMID: 4891546.
J.P. Boley (1991). The history of caesarean section. Canadian Medical Association Journal, 145 (4), 319-322. [PDF]
S. Crawford (2007). Companions, co-incidences or chattels? Children in the early Anglo-Saxon multiple burial ritual. In Children, Childhood & Society, S. Crawford and G. Shepherd, eds. BAR International Series 1696, Chapter 8. [PDF]
C. Cruz, & S. Codinha (2010). Death of mother and child due to dystocia in 19th century Portugal. International Journal of Osteoarchaeology, 20, 491-496. DOI: 10.1002/oa.1069.
W. Gilbert, D. Jandial, N. Field, P. Bigelow, & B. Danielsen (2004). Birth outcomes in teenage pregnancies. Journal of Maternal-Fetal and Neonatal Medicine, 16 (5), 265-270. DOI:10.1080/14767050400018064.
K. Hopkins (1965). The age of Roman girls at marriage. Population Studies, 18 (3), 309-327. DOI: 10.2307/2173291.
E. Lasso, M. Santos, A. Rico, J.V. Pachar, & J. Lucena (2009). Postmortem fetal extrusion. Cuadernos de Medicina Forense, 15 (55), 77-81. [HTML - Warning: Graphic images!]
T. Parkin (1992). Demography and Roman society. Baltimore: Johns Hopkins University Press.
K. Rosenberg (1992). The evolution of modern human childbirth. American Journal of Physical Anthropology, 35 (S15), 89-124. DOI: 10.1002/ajpa.1330350605.
J.M. Turfa (1994). Anatomical votives and Italian medical traditions. In: Murlo and the Etruscans, edited by R.D. DePuma and J.P. Small. University of Wisconsin Press.
C. Wells (1975). Ancient obstetric hazards and female mortality. Bulletin of the New York Academy of Medicine, 51 (11), 1235-49. PMID: 1101997.
A. Willis, & M. Oxenham (In press). A case of maternal and perinatal death in Neolithic southern Vietnam, c. 2100-1050 BCE. International Journal of Osteoarchaeology, 1-9. DOI: 10.1002/oa.1296.
J. Zias, H. Stark, J. Seligman, R. Levy, E. Werker, A. Breuer & R. Mechoulam (1993). Early medical use of cannabis. Nature, 363 (6426), 215-215. DOI: 10.1038/363215a0.
J. Zias (1995). Cannabis sativa (hashish) as an effective medication in antiquity: the anthropological evidence. In: S. Campbell & A. Green, eds., The Archaeology of Death in the Ancient Near East, pp. 232-234.
Note: Thanks to Marta Sobur for helping me gain access to the Zias 1995 article, and thanks to Sarah Bond for helping me track down the Justinian reference.