Childbirth and C-sections in pre-modern times

[Today's post first appeared at Dr. Kristina Killgrove's blog, Powered by Osteons. Kristina is a bioarchaeologist who studies the skeletons of ancient Romans to learn more about how they lived. Her biography at her blog begins, "When your life's passion is to study dead Romans, you often get asked for your 'origin story,' something that explains a long, abiding and, frankly, slightly creepy love for skeletons." Now that you undoubtedly want to know more, read the rest of her bio here, and then read below to learn why childbirth is so difficult and what the archaeological record has to tell us about outcomes for mother and child in the ancient world. For more about Kristina and her work, you can see her academic Website and find out about her latest research project; you can also find her at her G+ page and on Twitter as @BoneGirlPhD.]

Basically since we started walking upright, childbirth has been difficult for women.  Evolution selected for larger and larger brains in our hominin ancestors such that today our newborns have heads roughly 102% the size of the mother’s pelvic inlet width (Rosenberg 1992).

Yes, you read that right. Our babies’ heads are actually two percent larger than our skeletal anatomy would seem to allow.

Fetal head and mother’s pelvic inlet width

Obviously, we’ve also evolved ways to get those babies out.  Biologically, towards the end of pregnancy, a hormone is released that weakens the cartilage of the pelvic joints, allowing the bones to spread; and the fetus itself goes through a complicated movement to make its way down the pelvic canal, with its skull bones eventually sliding around and overlapping to get through the pelvis.  Culturally, we have another way to deliver these large babies: the so-called caesarean section.

Up until the 20th century, childbirth was dangerous.  Even today, in some less developed countries, roughly 1 maternal death occurs for every 100 live births, most of those related to obstructed labor or hemorrhage (WHO Fact Sheet 2010).  If we project these figures back into the past, millions of women must have died during or just after childbirth over the last several millennia.  You would think, then, that the discovery of childbirth-related burial – that is, of a woman with a fetal skeleton within her pelvis – would be common in the archaeological record.  It’s not.
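A back-of-envelope sketch shows how quickly that ratio adds up. The total-births figure below is a hypothetical round number chosen purely for illustration, not a historical estimate:

```python
# Back-of-envelope projection: roughly 1 maternal death per 100 live births,
# the ratio reported for the least-developed settings today.
MATERNAL_DEATHS_PER_100_BIRTHS = 1

# Hypothetical round number of live births across the pre-modern world
# over several millennia (illustrative only, not a demographic estimate).
hypothetical_live_births = 10_000_000_000

projected_maternal_deaths = hypothetical_live_births * MATERNAL_DEATHS_PER_100_BIRTHS // 100
print(f"{projected_maternal_deaths:,}")  # 100,000,000
```

Even under conservative assumptions, "millions of women" is, if anything, an understatement.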

Archaeological Evidence of Death in Childbirth

Two recent articles in the International Journal of Osteoarchaeology start the exact same way, by explaining that “despite this general acceptance of the vulnerability of young females in the past, there are very few cases of pregnant woman (sic) reported from archaeological contexts” (Willis & Oxenham, In Press) and “archaeological evidence for such causes of death is scarce and therefore unlikely to reflect the high incidence of mortality during and after labour” (Cruz & Codinha 2010:491).

The examples of burials of pregnant women that tend to get cited include two from Britain (both published in the 1970s), four from Scandinavia (published in the 1970s and 1980s), three from North America (published in the 1980s), one from Australia (1980s), one from Israel (1990s), six from Spain (1990s and 2000s), one from Portugal (2010), and one from Vietnam (2011) (most of these are cited in Willis & Oxenham).  Additionally, I found some unpublished reports: a skeleton from Egypt, a body from the Yorkshire Wolds in England, and a skeleton from England.

The images of these burials are impressive: even more than child skeletons, these tableaux are pathos-triggering snapshots of two lives cut short because of an evolutionary trade-off.

The wide range of dates and geographical areas illustrated in the slideshow demonstrates quite clearly that death of the mother-fetus dyad is a biological consequence of being human.  But what we have from archaeological excavations is still fewer than two dozen examples of possible childbirth-related deaths from all of human history.

Where are all the mother-fetus burials?

As with any bioarchaeological question, there are a number of reasons that we may or may not find evidence of practices we know to have existed in the past.  Some key issues at play in recovering evidence of death in childbirth include:
  • Archaeological Theory and Methodology.  From the dates of discovery of maternal-fetal death cited above, it’s obvious that these examples weren’t discovered until the 1970s.  Why the 70s?  It could be that the rise of feminist archaeology focused new attention on the graves of females, with archaeologists realizing the possibility that they would find maternal-fetal burials.  Or it could be that the methods employed got better around this time: archaeologists began to sift dirt with smaller mesh screens and float it for small particles like seeds and fetal bones.
  • Death at Different Times.  Although some women surely perished in the middle of childbirth, along with a fetus that was obstructed, in many cases delivery likely occurred, after which the mother, fetus, or both died.  In modern medical literature, there are direct maternal deaths (complications of pregnancy, delivery, or recovery) and indirect maternal deaths (pregnancy-related death of a woman with preexisting or newly arisen health problems) recorded up to about 42 days postpartum.  An infection related to delivery or severe postpartum hemorrhaging could easily have killed a woman in antiquity, leaving a viable newborn.  Similarly, newborns can develop infections and other conditions once outside the womb, and infant mortality was high in preindustrial societies.  With a difference between the time of death of the mother and child, a bioarchaeologist can’t say for sure that these deaths were related to childbirth.  Even finding a female skeleton with a fetal skeleton inside it is not always a clear example, as there are forensic cases of coffin birth or postmortem fetal extrusion, when the non-viable fetus is spontaneously delivered after the death of the mother.
  • Cultural Practices.  Another condition of being human is the ability to modify and mediate our biology through culture.  So the final possibility for the lack of mother-fetus burials is a specific society’s cultural practices in terms of childbirth and burial.  In the case of complicated childbirth (called dystocia in the medical literature), this is done through caesarean section (or C-section), a surgical procedure that dates back at least to the origins of ancient Rome.

Cultural Interventions in Childbirth

It’s often assumed that the term caesarean/cesarean section comes from the manner of birth of Julius Caesar, but it seems that the Roman author Pliny may have just made this up. The written record of the surgical practice originated as the Lex Regia (royal law) with the second king of Rome, Numa Pompilius (c. 700 BC), and was renamed the Lex Caesarea (imperial law) during the Empire.  The law is passed down through Justinian’s Digest (11.8.2) and reads:

Negat lex regia mulierem, quae praegnas mortua sit, humari, antequam partus ei excidatur: qui contra fecerit, spem animantis cum gravida peremisse videtur.

The royal law forbids burying a woman who died pregnant until her offspring has been excised from her; anyone who does otherwise is seen to have killed the hope of the offspring with the pregnant woman. [Translation mine]
Example of Roman gynaecological equipment: speculum
From the House of the Surgeon, Pompeii (1st c AD)
Photo credit: UVa Health Sciences Library

There’s discussion as to whether this law was instituted for religious reasons or for the more practical reason of increasing the population of tax-paying citizens.  In spite of this law, though, there isn’t much historical evidence of people being born by C-section.  Many articles claim that the earliest attested C-section produced Gorgias, an orator from Sicily, in 508 BC (e.g., Boley 1991), but Gorgias wasn’t actually born until 485 BC and I couldn’t find a confirmatory source for this claim.  Pliny, however, noted that Scipio Africanus, a celebrated Roman general in the Second Punic War, was born by C-section (Historia Naturalis VII.7); if this fact is correct, the earliest confirmation that the surgery could produce viable offspring dates to 236 BC.

This practice in the Roman world is not the same as our contemporary idea of C-section.  That is, the mother was not expected to survive and, in fact, most of the C-sections in Roman times were likely carried out following the death of the mother.  Until about the 1500s, when the French physician François Rousset broke with tradition and advocated performing C-sections on living women, the procedure was performed only as a last-ditch effort to save the neonate.  Some women definitely survived C-sections from the 16th to 19th centuries, but it was still a risky procedure that could easily lead to complications like endometritis or other infection.  Following advances in antibiotics around 1940, though, C-sections became more common because, most importantly, they were much more survivable.

Caesarean Sections and Roman Burials

Roman relief showing a birthing scene
Tomb of a Midwife (Tomb 100), Isola Sacra
Photo credit: magistrahf on Flickr

In spite of the Romans’ passion for recordkeeping, there’s very little evidence of C-sections.  It’s unclear how religiously the Lex Regia/Caesarea was followed in Roman times, which means it’s unclear how often the practice of C-section occurred.  Would all women have been subject to these laws?  Just the elite or just citizens?  How often did the section result in a viable newborn?  Who performed the surgery?  It probably wasn’t a physician (since men didn’t generally attend births), but a midwife wouldn’t have been trained to do it either (Turfa 1994).

Whereas we can supplement the historical record with bioarchaeological evidence to understand Romans’ knowledge of anatomy, their consumption of lead sugar, or the practice of crucifixion, this isn’t possible with C-sections – the surgery is done in soft tissue only, meaning we’d have to find a mummy to get conclusive evidence of an ancient C-section.

We can make the hypothesis, though, that because of the Lex Regia/Caesarea, we should find no evidence in the Roman world of a woman buried with a fetus still inside her.  That hypothesis is quickly negated by two reported cases – one from Kent in the Romano-British period and one from Jerusalem in the 4th century AD. The burial from Kent hasn’t been published, although there is a photograph in the slide show above.

Interestingly, the Jerusalem find was studied and reported by Joe Zias, who also analyzed the only known case of crucifixion to date.  Zias and colleagues report on the find in Nature (1993) and in an edited volume (1995), but their primary goal was to disseminate information about the presence of cannabis in the tomb (and its supposed role in facilitating childbirth), so there’s no picture and the information about the skeletons is severely lacking:

We found the skeletal remains of a girl (sic) aged about 14 at death in an undisturbed family burial tomb in Beit Shemesh, near Jerusalem.  Three bronze coins found in the tomb dating to AD 315-392 indicate that the tomb was in use during the fourth century AD.  We found the skeletal remains of a full-term (40-week) fetus in the pelvic area of the girl, who was lying on her back in an extended position, apparently in the last stages of pregnancy or giving birth at the time of her death… It seems likely that the immature pelvic structure through which the full-term fetus was required to pass was the cause of death in this case, due to rupture of the cervix and eventual haemorrhage (Zias et al. 1993:215).

Both Roman-era examples involve young women, and it is quite interesting that they were already fertile.  Age at menarche in the Roman world depended on health, which in turn depended on status, but it’s generally accepted that menarche happened around 14-15 years old and that fertility lagged behind until 16-17, meaning for the majority of the Roman female population, first birth would not occur until at least 17-19 years of age (Hopkins 1965, Amundsen & Diers 1969).  These numbers have led demographers like Tim Parkin (1992:104-5) to note that pregnancy was likely not a major contributor to premature death among Roman women.  But the female pelvis doesn’t reach skeletal maturity until the late teens or early 20s, so complications from the incompatibility in pelvis size versus fetal head size are not uncommon in teen pregnancies, even today (Gilbert et al. 2004).

More interesting than the young age at parturition is the fact that both of these young women were likely buried with their fetuses still inside them, in direct violation of the Lex Caesarea.  So it remains unclear whether this law was ever enforced, or if the application of the law varied based on location (these young women were both from the provinces), social status (both young women were likely higher status), or time period.  Why wasn’t medical intervention, namely C-section, attempted on these young women?  It’s possible that further context clues from the cemeteries and associated settlements could give us more information about medical practices in these specific locales, but neither the Zias articles nor the Kent report make this information available.

Childbirth – Biological or Cultural?

Childbirth is both a biological and a cultural process.  While biological variation is consistent across all human populations, the cultural processes that can facilitate childbirth are quite varied.  The evidence that bioarchaeologists use to reconstruct childbirth in the past includes skeletons of mothers and their fetuses; historical records of births, deaths, and interventions; artifacts that facilitate delivery; and context clues from burials.  The brief case study of death in childbirth in the Roman world further shows that history alone is insufficient to understand the process of childbirth, the complications inherent in it, and the form of burial that results.  In order to develop a better understanding of childbirth through time, it’s imperative that archaeologists pay close attention when excavating graves, meticulously document their findings, and publish any evidence of death in childbirth.

Further Reading:

D.W. Amundsen, & C.J. Diers (1969). The age of menarche in Classical Greece and Rome. Human Biology, 41 (1), 125-132. PMID: 4891546.

J.P. Boley (1991). The history of caesarean section. Canadian Medical Association Journal, 145 (4), 319-322. [PDF]

S. Crawford (2007). Companions, co-incidences or chattels? Children in the early Anglo-Saxon multiple burial ritual.  In Children, Childhood & Society, S. Crawford and G. Shepherd, eds.  BAR International Series 1696, Chapter 8. [PDF]

C. Cruz, & S. Codinha (2010). Death of mother and child due to dystocia in 19th century Portugal. International Journal of Osteoarchaeology, 20, 491-496. DOI: 10.1002/oa.1069.

W. Gilbert, D. Jandial, N. Field, P. Bigelow, & B. Danielsen (2004). Birth outcomes in teenage pregnancies. Journal of Maternal-Fetal and Neonatal Medicine, 16 (5), 265-270. DOI:10.1080/14767050400018064.

K. Hopkins (1965). The age of Roman girls at marriage. Population Studies, 18 (3), 309-327. DOI: 10.2307/2173291.

E. Lasso, M. Santos, A. Rico, J.V. Pachar, & J. Lucena (2009). Postmortem fetal extrusion. Cuadernos de Medicina Forense, 15 (55), 77-81. [HTML - Warning: Graphic images!]

T. Parkin (1992).  Demography and Roman society.  Baltimore: Johns Hopkins University Press.

K. Rosenberg (1992). The evolution of modern human childbirth. American Journal of Physical Anthropology, 35 (S15), 89-124. DOI: 10.1002/ajpa.1330350605.

J.M. Turfa (1994). Anatomical votives and Italian medical traditions. In: Murlo and the Etruscans, edited by R.D. DePuma and J.P. Small. University of Wisconsin Press.

C. Wells (1975). Ancient obstetric hazards and female mortality. Bulletin of the New York Academy of Medicine, 51 (11), 1235-49. PMID: 1101997.

A. Willis, & M. Oxenham (In press). A Case of Maternal and Perinatal Death in Neolithic Southern Vietnam, c. 2100-1050 BCE. International Journal of Osteoarchaeology, 1-9. DOI:10.1002/oa.1296.

J. Zias, H. Stark, J. Seligman, R. Levy, E. Werker, A. Breuer & R. Mechoulam (1993). Early medical use of cannabis. Nature, 363 (6426), 215-215. DOI: 10.1038/363215a0.

J. Zias (1995). Cannabis sativa (hashish) as an effective medication in antiquity: the anthropological evidence. In: S. Campbell & A. Green, eds., The Archaeology of Death in the Ancient Near East, pp. 232-234.

Note: Thanks to Marta Sobur for helping me gain access to the Zias 1995 article, and thanks to Sarah Bond for helping me track down the Justinian reference.

Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more surprising is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.


Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The deoxyribose is called that because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.
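That single-oxygen difference shows up directly in the textbook molecular formulas (ribose C5H10O5 versus deoxyribose C5H10O4). A quick sketch makes the comparison concrete:

```python
# Atom counts for the two pentoses. The only difference is one oxygen:
# the -OH at carbon 2 of ribose becomes a plain -H in deoxyribose.
ribose = {"C": 5, "H": 10, "O": 5}
deoxyribose = {"C": 5, "H": 10, "O": 4}

# Subtract atom-by-atom to see what "deoxy" removes.
diff = {atom: ribose[atom] - deoxyribose[atom] for atom in ribose}
print(diff)  # {'C': 0, 'H': 0, 'O': 1}
```

Same carbons, same hydrogens, one oxygen fewer – exactly the "deoxy" in deoxyribonucleic acid.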

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
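The combinatorics behind that claim are easy to sanity-check. Here's a minimal sketch; the chain lengths are arbitrary examples:

```python
# Number of distinct linear polymers of a given length from an
# alphabet of k monomers is simply k ** length.
def sequence_count(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

# 20 amino acids: even a short 10-residue peptide has over ten trillion variants.
print(sequence_count(20, 10))  # 10240000000000
# 4 DNA nucleotides: a 10-base stretch already has over a million variants.
print(sequence_count(4, 10))   # 1048576
```

A few dozen building blocks really do explode into astronomical numbers of possible polymers.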

Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
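The water bookkeeping in dehydration synthesis can be checked by simple atom counting: linking n glucose units (C6H12O6) releases n − 1 water molecules. A small sketch using those textbook formulas; note that it reproduces maltose (two glucoses, C12H22O11):

```python
# Atom-count check for dehydration synthesis: condensing n glucose
# monomers (C6H12O6) into one chain releases n - 1 waters (H2O).
def polymer_formula(n: int) -> dict:
    """Molecular formula of a linear glucose polymer of n units."""
    waters_released = n - 1
    return {
        "C": 6 * n,
        "H": 12 * n - 2 * waters_released,  # each water removes 2 H
        "O": 6 * n - waters_released,        # ... and 1 O
    }

print(polymer_formula(2))  # {'C': 12, 'H': 22, 'O': 11} -> maltose, C12H22O11
```

Run hydrolysis in reverse – add the waters back – and you recover n free glucoses, which is exactly what digestion does to starch.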

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. In us, we build this energy reserve primarily in the liver and access it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows.  Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every available bond occupied by a hydrogen single-bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconut and palm oils are also well known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is: one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fatty acid chain will form a double bond with each other, leaving no space for a hydrogen. Because these carbons share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be partially rehydrogenated to behave more like saturated fats and give foods a longer shelf life. This process of partial hydrogenation, adding some hydrogens back in, can yield trans fats. This kind of processed fat is now frowned upon and is being removed from many foods because of its association with adverse health effects. If a food label lists “partially hydrogenated” oils among the ingredients, the food may contain trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter, which is mostly fat, into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoonful of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some “bad.” The good cholesterol, high-density lipoprotein, or HDL, helps us out in part because it removes the bad cholesterol, low-density lipoprotein, or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids, or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.

Proteins
As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked together form a polypeptide. When many amino acids strung together interact with each other to form a properly folded molecule, we call that molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.

Once those interactions have occurred, the protein can fold into its final, or tertiary structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the amino acid chain’s secondary interactions must usually be ongoing, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.

In many instances, however, a complete protein consists of two or more interacting amino acid chains. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains, each properly folded into its tertiary structure, all interacting as a single unit. In cases like this, involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
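The predictable A–T/U and C–G pairing described above is simple enough to sketch in a few lines of code (an illustration of my own, not part of the original post):

```python
# A minimal sketch of complementary base pairing (my own illustration).
# Each base pairs predictably with its partner, which is how a template
# strand determines the sequence of a new strand.
DNA_PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIRS = {"A": "U", "T": "A", "C": "G", "G": "C"}  # copying DNA into RNA

def complement(template, pairs):
    """Return the strand that would pair with the given template."""
    return "".join(pairs[base] for base in template)

print(complement("GATTACA", DNA_PAIRS))  # CTAATGT
print(complement("GATTACA", RNA_PAIRS))  # CUAAUGU
```

Real replication and transcription involve far more cellular machinery than this, of course, but the pairing rule itself really is this simple lookup.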

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.

 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Modern Astronomers

This edition of the Notable Women in Science series presents modern astronomers. Many of these women are currently working in fields of research or have recently retired. As before, pages could be written about each of these women, but I have limited information to a summary of their education, work, and selected achievements. Many of these blurbs have multiple links, which I encourage you to visit to read extended biographies and learn about their current research interests.

From L to R: Anne Kinney, NASA Goddard Space Flight Center, Greenbelt, Md.; Vera Rubin, Dept. of Terrestrial Magnetism, Carnegie Institute of Washington; Nancy Grace Roman, retired, NASA Goddard; Kerri Cahoy, NASA Ames Research Center, Moffett Field, Calif.; Randi Ludwig, University of Texas, Austin, Texas.
Vera Cooper Rubin made advances decades before her research topic became popular. She received her B.A. from Vassar College, her M.A. from Cornell University, and her Ph.D. from Georgetown University in the 1940s and 50s. She continued at Georgetown University as a research astronomer, then assistant professor, and then moved to the Carnegie Institution. Among her honors are election to the National Academy of Sciences, the National Medal of Science, and the Gold Medal of the Royal Astronomical Society. She was only the second female recipient of that medal, the first being Caroline Herschel. She has had an asteroid and the Rubin-Ford effect named after her. She is currently enjoying her retirement.

Dr. Nancy Roman
Nancy Grace Roman has had a lifetime love for astronomy. She received her B.A. from Swarthmore College and her Ph.D. from the University of Chicago in the 1940s. She started her career as a research associate and instructor at Yerkes Observatory but moved on due to a low likelihood of tenure because of her gender. She eventually moved through chief and scientist positions to Head of the Astronomical Data Center at NASA. She was the first woman to hold an executive position at NASA. She has received honorary D.Sc. degrees from several colleges and several awards, including the American Astronautical Society’s William Randolph Lovelace II Award and the Women in Aerospace’s Lifetime Achievement Award. She currently continues to inspire young girls to dream big, consulting and lecturing by invitation at venues across the U.S.

Catharine (Katy) D. Garmany researches the hottest stars. Dr. Garmany earned her B.S. from Indiana University and her M.A. and Ph.D. from the University of Virginia in the 1960s and 70s. She continued with research and teaching at several academic institutions. She has served as past president of the Astronomical Society of the Pacific and received the Annie Jump Cannon Award. She is currently associated with the National Optical Astronomy Observatory, where she works on several projects.

Dr. Elizabeth Roemer
Elizabeth Roemer is a premier recoverer of “lost” comets. She received her B.A. and Ph.D. from the University of California – Berkeley in the 1950s. She spent some time as a researcher at U.S. observatories before going to the University of Arizona and moving through the professorial ranks. She has received several awards, including the Mademoiselle Merit Award, the Benjamin Apthorp Gould Prize from the National Academy of Sciences (she is one of only four recipients), and a NASA Special Award. She is currently Professor Emerita at the University of Arizona, with research interests in comets and minor planets (“asteroids”), including positions (astrometry), motions, and physical characteristics, especially of objects that approach the Earth’s orbit.

Margaret Joan Geller is a widely respected cosmologist. She received her A.B. from the University of California-Berkeley, and M.A. and Ph.D. from Princeton University in the 1970s. She moved through the professorial ranks at Harvard University and is currently an astrophysicist at the Smithsonian Astrophysical Observatory. Some of her awards include the MacArthur “Genius” Award and the James Craig Watson Award from the National Academy of Sciences. She continues to provide public education in science through written, audio, and video media.

In 1995, the majestic spiral galaxy NGC 4414 was imaged by the Hubble Space Telescope as part of the HST Key Project on the Extragalactic Distance Scale. An international team of astronomers, led by Dr. Wendy Freedman of the Observatories of the Carnegie Institution of Washington, observed this galaxy on 13 different occasions over the course of two months.

Wendy Laurel Freedman is concerned with the fundamental question “How old is the universe?” She received her B.S., M.S., and Ph.D. from the University of Toronto in the 1970s and 80s. After earning her Ph.D., she joined the Observatories of the Carnegie Institution in Pasadena, California, as a postdoctoral fellow and became faculty a few years later, the first woman to join the Observatory’s permanent scientific staff. She has received several awards and honors, among them the Gruber Cosmology Prize. Her current work focuses on the Giant Magellan Telescope and the questions it will answer.

Sandra Moore Faber researches the origin of the universe. Dr. Faber earned her B.A. from Swarthmore College and her Ph.D. from Harvard University in the 1960s and 70s. She joined the Lick Observatory at the University of California – Santa Cruz and moved through the astronomer and professorial ranks. Her achievements include election to the National Academy of Sciences, the Heineman Prize, a NASA Group Achievement Award, the Harvard Centennial Medal, and the Bower Award. She continues to research the formation and evolution of galaxies and the evolution of structure in the universe.

Dr. Heidi Hammel

Heidi Hammel is known as an excellent science communicator, researcher, and leader. She earned her B.S. from the Massachusetts Institute of Technology and her Ph.D. from the University of Hawaii in the 1980s. At NASA she led the imaging team for Voyager 2’s encounter with Neptune and became known for her science communication during that mission. She then returned to MIT as a scientist for nearly a decade. Among her honors, she has received the Vladimir Karapetoff Award, the Klumpke-Roberts Award, and the Carl Sagan Medal. She is currently at the Space Science Institute, with research focused on ground- and space-based studies of Uranus and Neptune.

Judith Sharn Young was inspired by black holes. She earned her B.A. from Harvard University and her M.S. and Ph.D. from the University of Minnesota in the 1970s. She began her academic career at the University of Massachusetts – Amherst, proceeding through the professorial ranks. She has earned several honors, including the Annie Jump Cannon Prize, the Maria Goeppert-Mayer Award, and a Sloan Research Fellowship. She is currently teaching and researching galaxies and imaging at the University of Massachusetts. 

Jocelyn Bell Burnell is the discoverer of pulsars. She earned her B.Sc. from the University of Glasgow and her Ph.D. from Cambridge University in the 1960s. After graduation, she worked at the University of Southampton in research and teaching and continued in research positions at several institutions. She is well known for her discovery of pulsars, which earned her research advisor a Nobel Prize. Among her awards are the Albert A. Michelson Prize, the Beatrice Tinsley Prize, the Herschel Medal, the Magellanic Premium, and the Grote Reber Medal. She has received honorary doctorates from Williams College, Harvard University, and the University of Durham. She is currently Professor of Physics and Department Chair at the Open University, England.

Awards Mentioned:
The National Academy of Sciences is composed of select scientists who are leaders in their fields.
The National Medal of Science is a presidential award given to physical, biological, mathematical, or engineering scientists who have contributed outstanding knowledge to their field. 
The Gold Medal of the Royal Astronomical Society is the society’s highest honor given in astronomy.
The American Astronautical Society’s William Randolph Lovelace II Award recognizes outstanding contributions to space science.
The Women in Aerospace’s Lifetime Achievement Award is given for contributions to aerospace science over a career spanning 25 years.
The Annie Jump Cannon Award is given to a doctoral student in astronomy for outstanding research and promise of future excellence.
The Mademoiselle Merit Award was presented annually to young women showing the promise of great achievement.
The Benjamin Apthorp Gould Prize is given in recognition of scientific accomplishments by an American citizen. 
The NASA Special Award is given for exceptional work.
The MacArthur “Genius” Award is given to those who show exceptional merit and promise in creative work.
The James Craig Watson Award is given for contributions in astronomy. 
The Gruber Cosmology Prize is given to a scientist for fundamental advances in our understanding of the universe.
The Heineman Prize is given for outstanding work in the field of astrophysics. 
The NASA Group Achievement Award is given for accomplishments that advance NASA’s mission.
The Harvard Centennial Medal is given to graduates of Harvard who have contributed to society since graduation.
The Bower Award is given for achievement in science. 
The Vladimir Karapetoff Award is given for outstanding technical achievement. 
The Klumpke-Roberts Award is given for enhancing public understanding and appreciation of astronomy. 
The Carl Sagan Medal is awarded for outstanding communication to the public about planetary science. 
The Maria Goeppert-Mayer Award is given to a female physicist for outstanding achievement in her early career. 
The Albert A. Michelson Prize is given for technical and professional achievement. 
The Beatrice Tinsley Prize is given for outstanding research contribution to astronomy or astrophysics. 
The Herschel Medal is given for investigations of outstanding merit in astrophysics.
The Magellanic Premium Medal is awarded for a discovery or invention advancing navigation or astronomy.

Much of the information for this post came from the book Notable Women in the Physical Sciences: A Biographical Dictionary edited by Benjamin F. Shearer and Barbara S. Shearer.

Adrienne M Roehrich, Double X Science Chemistry Editor

Ask not what science can do for you

Coast Guard Lt. Cmdr. Kimberly Roman, a general physician,
examines a Trinidadian woman at the Couva District Health Facility

My workaday business is scientific editing. I just completed a behemoth job of hundreds of pages, all focused on reporting the findings of clinical trials (meaning trials involving humans instead of other animals) of a drug that keeps people alive. Among those trials was one in which healthy people participated, which is one way that companies who develop therapies test their treatments. It’s important to know what outcomes are in healthy people as well as those who are targets of the therapy.

I read in these papers how the healthy people responded to the therapy–how they underwent needle sticks for blood draws so that researchers could analyze seemingly every last chemical in their blood, how they dealt with side effects minor and major, including headaches, vomiting, and other distress, and how their participation helped researchers determine the need for a lower dose. As I read about them and the details of their participation, I thought, “Wow.” Here are these healthy people entering clinical trials–yes, they do get paid–and their participation helps guide the application of these therapies for people who would die without them. That is some citizen science.

If you’ve ever taken an FDA-approved drug for anything, you’ve benefited from these people–paid or unpaid–who have entered into clinical trials. We’re all beneficiaries of their contributions, their blood draws, urine samples, headaches, gastrointestinal distress, and time away from their families. And when it comes to women, we can contribute to these trials in many, many ways.

Becoming a part of clinical research means being a part of the practice of science. When I think of the importance of women in clinical research, I think about women like Elizabeth Glaser, who established the Elizabeth Glaser Pediatric AIDS Foundation before she–and one of her two children–died of AIDS. Part of the foundation’s focus is funding research into AIDS prevention and cure in children. Elizabeth contracted HIV through a blood transfusion during the birth of her daughter, Ariel, and she passed the virus to her daughter and to her son, Jake, who was born later. Ariel died in 1988, but Jake is now a healthy adult, still alive in part thanks to his mother’s work to fund research and to the people who participate in clinical trials for therapies against HIV/AIDS.

Today, December 1, is World AIDS Day. The theme for this year’s day is “Leading with science, uniting for action.” Since the advent of the first reported cases of HIV in 1981, more than 25 million people have died of AIDS worldwide. In 2008, 2 million people died, in spite of therapies that now save lives. Almost everyone who now lives with HIV lives in low- and middle-income countries and has no access to these effective therapies. There still is no cure for HIV.

In the United States, about 1 million people have an HIV infection. Of these, women represent about 27% of new infections each year and 25% of those infected. Clinical trials are one critical way that these women–and their children–can have medical interventions they need to remain healthy. It is one way to lead with science, to unite for action.

Not every day is World AIDS Day, but every day, someone, somewhere–a woman, mother, sister, daughter–needs medical interventions. Historically, women have been underrepresented in clinical studies. Mother, scientist, and four-time breast cancer survivor Susan Niebur, now in deep pain from metastatic breast cancer, has called–repeatedly–for more research into fighting metastatic breast cancer. As she notes, no woman survives this cancer. Thirty percent of cases of breast cancer progress to metastatic (spreading) breast cancer, yet only 3% of funding goes to researching it, even as most women diagnosed with it die within three years. Niebur observes that wearing a ribbon does not cure cancer. She writes, “I just want more time.” 

Part of giving women with breast cancer more time is participating in clinical research studies–studies that need both women who have cancer and women who do not–so that research can advance, drugs in the pipeline can move forward in testing. As Niebur has written, we need an Army of Women willing to get into the trenches of research, get needle sticks, give up urine, and possibly vomit occasionally, so that other women–all women–can benefit from clinical research.

If that need on behalf of other XXers isn’t sufficient, keep in mind that participation in trials can include other benefits. More and more women are finding that participation pays, literally, sometimes in the thousands of dollars. But it’s not just the money: some women have even reported that participation has led them to better health and, because they earn this money a few days at a time throughout the year, has given them more time to spend with their children. These are not trivial benefits, and the contributions women make when they participate in trials are not trivial either.

Would you like to learn more about clinical trials, how they work, and where you might find one in which you could participate? A good place to start is a database of ongoing and past trials in the United States and around the world. After all, in spite of all of those personal benefits for a participant, the most important part for those who suffer and die is that you participate. In this case, you do not ask science what it can do for you. You ask, on behalf of girls and women and everyone everywhere: What can you do for science?

Emily Willingham 

Double Xplainer: Once in a Blue Moon

Full Moon, from Flickr user Proggie under
Creative Commons license.
Tonight, August 31, 2012, is the second full Moon of August. The last time two full Moons occurred in the same month was in 2010, and the next will be in 2015, so while such events are uncommon, they aren’t terribly rare either. In fact, you’ve probably heard the second full Moon given a name: “blue moon.” (The Moon will not appear to be a blue color, though, cool as that would be. More on that in a bit.) What you may not know is that this term dates back only to 1946, and is actually a mistake.

According to Sky and Telescope, a premier astronomy magazine (check your local library!), the writer James Hugh Pruett made an incorrect assumption about the use of the term “blue moon” in his March 1946 article. His source was the Maine Farmers’ Almanac, but he misinterpreted it. The almanac used “blue moon” to refer to the rare occasion when four full Moons happen in one season, when there are usually only three. By the almanac’s standards, tonight’s full Moon is not a blue moon (though there will be one on August 21, 2013).

However, even that definition of “blue moon” apparently only dates to the early 19th century. In its colloquial, non-astronomical sense, a “blue moon” is something that rarely or never happens: like the Moon appearing blue. The Moon is white and gray when it’s high in the sky, and can appear very red, orange, or yellow near the horizon for the same reason the Sun does. As far as I can tell, the only time the Moon appears blue is when there’s a lot of volcanic ash in the air, also a rare event (thankfully) for most of the world. The popular song “Blue Moon” (written by everyone’s favorite gay misanthrope, Lorenz Hart) uses “blue” to mean sad, rather than rare.

I’m perfectly happy to keep the common mistaken usage of “blue moon” around, though, since it’s not really a big deal to me. Call tonight’s full Moon a blue moon, and I’ll back you up. However, because it’s me, let’s talk about the Moon and the Sun and why this stuff is kind of arbitrary.

The Moon and the Sun Don’t Get Along

The calendar used in much of the world is the Gregorian calendar, named for Pope Gregory XIII, who instituted it in 1582. The Gregorian calendar, in turn, was based on the older Roman calendar (known as the Julian calendar, for famous pinup girl Julie Callender, er, Julius Caesar). The Romans’ calendar was based on the Sun: a year is the length of time for the Sun to return to the same spot in the sky. This length of time is approximately 365.25 days, which is why there’s a leap year every four years. (Experts know I’m simplifying; if you want more information, see this post at Galileo’s Pendulum.)

A problem arises when you try to break the year into smaller pieces. Traditionally, this has been done through reference to the Moon’s phases. The time to cycle through all the phases of the Moon is called a lunation, which is about 29 days, 12 hours, 44 minutes, and 3 seconds long. You don’t need to pull out a calculator to realize that a lunation doesn’t divide into a year evenly, but it’s still a reasonable way to mark the passage of time within a year, so it’s the foundation of the month (or moonth).

Many calendars—the traditional Chinese calendar, the Jewish calendar, and others—define the month based on a lunation, but don’t fix the number of months in a year. That means some years have 12 months, and others have 13: a leap month. It also means that holidays in these calendars move relative to the Gregorian calendar, such that Yom Kippur or the Chinese New Year don’t fall on the same date in 2012 that they did in 2011. (The Christian religious calendar combines aspects of the Jewish and the Gregorian calendars: Christmas is always December 25, but Easter and associated holidays are tied to Passover—which is coupled to the first full Moon after the spring equinox, and so can occur on a variety of dates in March and April.)

Another resolution to the problem of lunations vs. Sun is to ignore the Sun; this is what the Islamic calendar does. Months are defined by lunations, and the year is precisely 12 months, meaning the year in this calendar is 354 or 355 days long. This is why the holy month of Ramadan moves throughout the Gregorian year, happening sometimes in summer, and sometimes in winter.
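The drift of Ramadan follows directly from the arithmetic: twelve average lunations fall about eleven days short of a solar year. A quick Python sketch (using the average lunation length quoted earlier):

```python
lunation_days = 29.5306       # average synodic month, in days
lunar_year = 12 * lunation_days   # length of a purely lunar year
drift = 365.2425 - lunar_year     # how far it falls short of a solar year

print(round(lunar_year, 1))  # 354.4
print(round(drift, 1))       # 10.9
```

That ten-to-eleven-day shortfall is why dates in a purely lunar calendar slide earlier through the Gregorian year, cycling through all the seasons over roughly three decades.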

The Gregorian calendar takes the opposite approach from the Islamic calendar: its months are fixed, but they are not based on a lunation at all. Months may be 30 days long (roughly one lunation), 31 days, or 28 days; the latter two options make no astronomical sense at all. Solar-only calendars have some advantages: since seasons are defined relative to the Sun, the equinoxes and solstices happen roughly on the same date every year, which doesn’t happen in lunation-based calendars. It’s all a matter of taste, culture, and convenience, however, since the cycles of the Sun and the Moon don’t cooperate with the length of the day on Earth, or with each other.

Blue moons in the common post-1946 usage never happen in lunation-based calendar systems because by definition each phase of the Moon only occurs once in a month. On the other hand, the version from the Maine Farmers’ Almanac is relevant to any calendar system, because it’s defined by the seasons. As I wrote in my earlier DXS post, seasons are defined by the orbit of Earth around the Sun, and the relative orientation of Earth’s axis. Thus, summer is the same number of days whatever calendar system you use, even though it may not always be the same number of months. In a typical season, there will be three full Moons, but because of the mismatch between lunations and the time between equinoxes and solstices, some rare seasons may have four full Moons.

The Moon and Sun have provided patterns for human life and culture, metaphors for poetry and drama, and of course lots of superstition and pseudoscience. However, one thing most people can agree upon: the full Moon, blue or not, is a thing of beauty. If you can, go out tonight and have a look at it—and give it a wink in honor of the first human to set foot on it, Neil Armstrong.