Hormonal birth control explainer: a matter of health

Politics often interferes where it has no natural business, and one of those places is the discussion among a teenager, her parents, and her doctor or between a woman and her doctor about the best choices for health. The hottest button politics is pushing right now takes the form of a tiny hormone-containing pill known popularly as the birth control pill or, simply, The Pill. This hormonal medication, when taken correctly (same time every day, every day), does indeed prevent pregnancy. But like just about any other medication, this one has multiple uses, the majority of them unrelated to pregnancy prevention.

But let’s start with pregnancy prevention first and get it out of the way. When I used to ask my students how these hormone pills work, they almost invariably answered, “By making your body think it is pregnant.” That’s not correct. We take advantage of our understanding of how our bodies regulate hormones not to mimic pregnancy, exactly, but instead to flatten out what we usually talk about as a hormone cycle. 

The Menstrual Cycle

In a hormonally cycling girl or woman, the brain talks to the ovaries and the ovaries send messages to the uterus and back to the brain. All this chat takes place via chemicals called hormones. In human females, the ovarian hormones are progesterone and estradiol, a type of estrogen, and the brain hormones are luteinizing hormone and follicle-stimulating hormone. The levels of these four hormones drive what we think of as the menstrual cycle, which exists to prepare an egg for fertilization and to make the uterine lining ready to receive a fertilized egg, should it arrive.

Fig. 1. Female reproductive anatomy. Credit: Jeanne Garbarino.
In the theoretical 28-day cycle, fertilization (fusion of sperm and egg), if it occurs, will happen about 14 days in, timed with ovulation, or release of the egg from the ovary into the Fallopian tube or oviduct (see video–watch for the tiny egg–and Figure 1). The fertilized egg will immediately start dividing, and a ball of cells (called a blastocyst) that ultimately develops is expected to arrive at the uterus a few days later.
If the ball of cells shows up and implants in the uterine wall, the ovary continues producing progesterone to keep that fluffy, welcoming uterine lining in place. If nothing shows up, the ovaries drop output of estradiol and progesterone so that the uterus releases its lining of cells (which girls and women recognize as their “period”), and the cycle starts all over again.


A typical cycle

The typical cycle (which almost no girl or woman seems to have) begins on day 1 when a girl or woman starts her “period.” This bleeding is the shedding of the uterine lining, a letting go of tissue because the ovaries have bottomed out production of the hormones that keep the tissue intact. During this time, the brain and ovaries are in communication. In the first two weeks of the cycle, called the “follicular phase” (see Figure 2), an ovary has the job of promoting an egg to mature. The egg is protected inside a follicle that spends about 14 days reaching maturity. During this time, the ovary produces estrogen at increasing levels, which causes thickening of the uterine lining, until the estradiol hits a peak about midway through the cycle. This spike sends a hormone signal to the brain, which responds with a hormone spike of its own.

Fig. 2. Top: Day of cycle and phases. Second row: Body temperature (at waking) through cycle.
Third row: Hormones and their levels. Fourth row: What the ovaries are doing.
Fifth row: What the uterus is doing. Via Wikimedia Commons
In the figure, you can see this spike as the red line indicating luteinizing hormone. A smaller spike of follicle-stimulating hormone (blue line), also from the brain, occurs simultaneously. These two hormones along with the estradiol peak result in the follicle expelling the egg from the ovary into the Fallopian tube, or oviduct (Figure 3, step 4). That’s ovulation.
Fun fact: Right when the estrogen spikes, a woman’s body temperature will typically drop a bit (see “Basal body temperature” in the figure), so many women have used temperature monitoring to know that ovulation is happening. Some women also may experience a phenomenon called mittelschmerz, a pain sensation on the side where ovulation is occurring; ovaries trade off follicle duties with each cycle.  

The window of time for a sperm to meet the egg is usually very short, about a day. Meanwhile, as the purple line in the “hormone level” section of Figure 2 shows, the ovary in question immediately begins pumping out progesterone, which maintains that proliferated uterine lining should a ball of dividing cells show up.
Fig. 3. Follicle cycle in the ovary. Steps 1-3, follicular phase, during
which the follicle matures with the egg inside. Step 4: Ovulation, followed by
the luteal phase. Step 5: Corpus luteum (yellow body) releases progesterone.
Step 6: corpus luteum degrades if no implantation in uterus occurs.
Via Wikimedia Commons.
The structure in the ovary responsible for this phase, the luteal phase, is the corpus luteum (“yellow body”; see Figure 3, step 5), which puts out progesterone for a couple of weeks after ovulation to keep the uterine lining in place. If nothing implants, the corpus luteum degenerates (Figure 3, step 6). If implantation takes place, this structure will (should) instead continue producing progesterone through the early weeks of pregnancy to ensure that the lining doesn’t shed.
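
If it helps to see the theoretical timeline in one place, here is a minimal sketch in Python of the idealized 28-day cycle described above. The day boundaries are only the textbook approximation (real cycles vary widely), and the phase labels simply restate the figure.

# Idealized 28-day cycle from the explanation above; real cycles vary widely.
def cycle_phase(day):
    """Return the phase of the textbook 28-day cycle for a given day (1-28)."""
    if not 1 <= day <= 28:
        raise ValueError("This idealized model only covers days 1 through 28.")
    if day <= 5:
        return "menstruation: uterine lining sheds (start of the follicular phase)"
    if day <= 13:
        return "follicular phase: follicle matures, estradiol rises, lining thickens"
    if day == 14:
        return "ovulation: LH/FSH spike, egg released into the Fallopian tube"
    return "luteal phase: corpus luteum releases progesterone, lining maintained"

for d in (1, 7, 14, 21, 28):
    print(d, cycle_phase(d))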

How do hormones in a pill stop all of this?

The hormones from the brain–luteinizing hormone and follicle-stimulating hormone–spike because the brain gets signals from the ovarian hormones. When a girl or woman takes the pills, which contain synthetic versions of the ovarian hormones, the hormone levels don’t peak that way. Instead, the pills expose the girl or woman to a flat daily dose of hormones (synthetic estradiol and synthetic progesterone) or hormone (synthetic progesterone only). Without these peaks (and valleys), the brain doesn’t release the hormones that trigger follicle maturation or ovulation. Without follicle maturation and ovulation, no egg will be present for fertilization.

Assorted hormonal pills. Via Wikimedia Commons.
Most prescriptions of hormone pills are for packets of 28 pills. Typically, seven of these pills–sometimes fewer–are “dummy pills.” During the time a woman takes these dummy pills, her body shows the signs of withdrawal from the hormones, usually as a fairly light bleeding for those days, known as “withdrawal bleeding.” With the lowest-dose pills, the uterine lining may proliferate very little, so that this bleeding can be quite light compared to what a woman might experience under natural hormone influences.
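
As a rough illustration of the packet layout described above, here is a minimal Python sketch assuming the common arrangement of 21 active pills followed by 7 placebo pills; actual packets vary by brand and prescription.

# Sketch of a common 28-pill packet: 21 hormone-containing pills, then 7 placebos.
ACTIVE_PILLS = 21
PLACEBO_PILLS = 7

def pill_for_day(day):
    """Return which kind of pill falls on a given day (1-28) of this packet layout."""
    if not 1 <= day <= ACTIVE_PILLS + PLACEBO_PILLS:
        raise ValueError("Day must be between 1 and 28 for this packet layout.")
    return "active (hormone)" if day <= ACTIVE_PILLS else "placebo (withdrawal bleeding typically occurs)"

print(pill_for_day(10))   # active (hormone)
print(pill_for_day(25))   # placebo (withdrawal bleeding typically occurs)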

How important are hormonal interventions for birth control?

Every woman has a story to tell, and the stories about the importance of hormonal birth control are legion. My personal story is this: I have three children. With our last son, I had two transient ischemic attacks at the end of the pregnancy, tiny strokes resulting from high blood pressure in the pregnancy. I had to undergo an immediate induction. This was the second time I’d had this condition, called pre-eclampsia, having also had it with our first son. My OB-GYN told me in no uncertain terms that I could not–should not–get pregnant again, as a pregnancy could be life threatening.

But I’m married, happily. As my sister puts it, my husband and I “like each other.” We had to have a failsafe method of ensuring that I wouldn’t become pregnant and endanger my life. For several years, hormonal medication made that possible. After I began having cluster headaches and high blood pressure on this medication in my forties, my OB-GYN and I talked about options, and we ultimately turned to surgery to prevent pregnancy.

But surgery is rarely reversible. For a younger woman, it’s not the temporary option that hormonal pills provide. Hormonal interventions also are available in other forms, including the vaginal ring, hormonal intrauterine devices, and implants, all of them reversible.


One of the most important things a society can do for its own health is to ensure that women in that society have as much control as possible over their reproduction. Thanks to hormonal interventions, although I’ve been capable of childbearing for 30 years, I’ve had only three children in that time. The ability to control my childbearing has meant I’ve been able to focus on being the best woman, mother, friend, and partner I can be, not only for myself and my family, but as a contributor to society, as well.

What are other uses of hormonal interventions?

Heavy, painful, or irregular periods. Did you read that part about how flat hormone inputs can mean less buildup of the uterine lining and thus less bleeding and a shorter period? Many girls and women who lack hormonal interventions experience bleeding so heavy that they become anemic. This kind of bleeding can take a girl or woman out of commission for days at a time, in addition to threatening her health. Pain and irregular bleeding also are disabling and frequently diminish quality of life. Taking a single pill each day can make it all better.


Unfortunately, the current political climate can take this situation–especially for teenage girls–and cast it as a personal moral failing, implying that a girl who takes hormonal medications is a “slut,” when in reality the hormonal intervention is literally maintaining her health.

For some context, imagine that whenever a boy or man produced sperm, it was painful or caused extensive blood loss that resulted in anemia. Would there be any issues raised with providing a medication that successfully addressed this problem?

Polycystic ovarian syndrome. This syndrome is, at its core, an imbalance of the ovarian hormones that is associated with all kinds of problems, from acne to infertility to weight gain to uterine cancer. Guess what balances those hormones back out? Yes. Hormonal medication, otherwise known as The Pill.

Again, for some context, imagine that this syndrome affected testes instead of ovaries, and caused boys and men to become infertile, experience extreme pain in the testes, gain weight, be at risk for diabetes, and lose their hair. Would there be an issue with providing appropriate hormonal medication to address this problem?

Acne. I had a friend in high school who was on hormonal medication, not because she was sexually active (she was not) but because she struggled for years with acne. This is an FDA-approved use of this medication.

Are there health benefits of hormonal interventions?

In a word, yes. They can protect against certain cancers, including ovarian and endometrial, or uterine, cancer. Women die from these cancers, and this protection is not negligible. They may also help protect against osteoporosis, or bone loss. In cases like mine, they protect against a potentially life-threatening pregnancy.

Speaking of pregnancy, access to contraception is “the only reliable way” to reduce unwanted pregnancies and abortion rates [PDF]. Pregnancy itself is far more threatening to a girl’s (in particular) or woman’s health than hormonal contraception.

Are there health risks with hormonal interventions?

Yes. No medical intervention is without risk. In the case of hormonal interventions, lifestyle habits such as smoking can increase the risk of high blood pressure and blood clots. Age can be a factor, although–as I can attest–women no longer have to stop taking hormonal interventions after age 35 as long as they are nonsmokers and their blood pressure is normal. These interventions have been associated with a decrease in some cancers, as I’ve noted, but also with an increase in others, such as liver cancer, over the long term. The effect on breast cancer risk is mixed and may have to do with how long taking the medication delays childbearing. ETA: PLoS Medicine just published a paper (open access) addressing the effects of hormonal interventions on cancer risk.
———————————————————
By Emily Willingham, DXS Managing Editor
Opinions expressed in this piece are my own and do not necessarily reflect the opinions of all DXS editors or contributors.

Anorexia nervosa, neurobiology, and family-based treatment

Via Wikimedia Commons
Photo credit: Sandra Mann
By Harriet Brown, DXS contributor

Back in 1978, psychoanalyst Hilde Bruch published the first popular book on anorexia nervosa. In The Golden Cage, she described anorexia as a psychological illness caused by environmental factors: sexual abuse, over-controlling parents, fears about growing up, and/or other psychodynamic factors. Bruch believed young patients needed to be separated from their families (a concept that became known as a “parentectomy”) so therapists could help them work through the root issues underlying the illness. Then, and only then, patients would choose to resume eating. If they were still alive.

Bruch’s observations dictated eating-disorders treatments for decades, treatments that led to spectacularly ineffective results. Only about 35% of people with anorexia recovered; another 20% died, of starvation or suicide; and the rest lived with some level of chronic illness for the rest of their lives.

Not a great track record, overall, and especially devastating for women, who suffer from anorexia at a rate of 10 times that of men. Luckily, we know a lot more about anorexia and other eating disorders now than we did in 1978.

“It’s Not About the Food”

In Bruch’s day, anorexia wasn’t the only illness attributed to faulty parenting and/or trauma. Therapists saw depression, anxiety, schizophrenia, eating disorders, and homosexuality (long considered a psychiatric “illness”) as ailments of the mind alone. Thanks to the rising field of behavioral neuroscience, we’ve begun to untangle the ways brain circuitry, neural architecture, and other biological processes contribute to these disorders. Most experts now agree that depression and anxiety can be caused by, say, neurotransmitter imbalances as much as unresolved emotional conflicts, and treat them accordingly. But the field of eating-disorders treatment has been slow to jump on the neurobiology bandwagon. When my daughter was diagnosed with anorexia in 2005, for instance, we were told to find her a therapist and try to get our daughter to eat “without being the food police,” because, as one therapist informed us, “It’s not about the food.”

Actually, it is about the food. Especially when you’re starving.

Ancel Keys’ 1950 Semi-Starvation Study tracked the effects of starvation and subsequent re-feeding on 36 healthy young men, all conscientious objectors who volunteered for the experiment. Keys was drawn to the subject during World War II, when millions in war-torn Europe – especially those in concentration camps – starved for years. One of Keys’ most interesting findings was that starvation itself, followed by re-feeding after a period of prolonged starvation, produced both physical and psychological symptoms, including depression, preoccupation with weight and body image, anxiety, and obsessions with food, eating, and cooking—all symptoms we now associate with anorexia. Re-feeding the volunteers eventually reversed most of the symptoms, but the re-feeding period proved psychologically difficult, in some ways more difficult than the starvation itself. These results were a clear illustration of just how profound the effects of months of starvation were on the body and mind.

Alas, Keys’ findings were pretty much ignored by the field of eating-disorders treatment for 40-some years, until new technologies like functional magnetic resonance imaging (fMRI) and new research gave his work fresh context. We now know there is no single root cause for eating disorders. They’re what researchers call multi-factorial, triggered by a perfect storm of factors that probably differs for each person who develops an eating disorder. “Personality characteristics, the environment you live in, your genetic makeup—it’s like a cake recipe,” says Daniel le Grange, Ph.D., director of the Eating Disorders Program at the University of Chicago. “All the ingredients have to be there for that person to develop anorexia.”

One of those ingredients is genetics. Twenty years ago, the Price Foundation sponsored a project that collected DNA samples from thousands of people with eating disorders, their families, and control participants. That data, along with information from the 2006 Swedish Twin Study, suggests that anorexia is highly heritable. “Genes play a substantial role in liability to this illness,” says Cindy Bulik, Ph.D., a professor of psychiatry and director of the University of North Carolina’s Eating Disorders Program. And while no one has yet found a specific anorexia gene, researchers are focusing on an area of chromosome 1 that shows important gene linkages.

Certain personality traits associated with anorexia are probably heritable as well. “Anxiety, inhibition, obsessionality, and perfectionism seem to be present in families of people with an eating disorder,” explains Walter Kaye, M.D., who directs the Eating Disorders Treatment and Research Program at the University of California-San Diego. Another ingredient is neurobiology—literally, the way your brain is structured and how it works. Dr. Kaye’s team at UCSD uses fMRI technology to map blood flow in people’s brains as they think of or perform a task. In one study, Kaye and his colleagues looked at the brains of people with anorexia, people recovered from anorexia, and people who’d never had an eating disorder as they played a gambling game. Participants were asked to guess a number and were rewarded for correct guesses with money or “punished” for incorrect or no guesses by losing money.

Participants in the control group responded to wins and losses by “living in the moment,” wrote researchers: “That is, they made a guess and then moved on to the next task.” But people with anorexia, as well as people who’d recovered from anorexia, showed greater blood flow to the dorsal caudate, an area of the brain that helps link actions and their outcomes, as well as differences in their brains’ dopamine pathways. “People with anorexia nervosa do not live in the moment,” concluded Kaye. “They tend to have exaggerated and obsessive worry about the consequences of their behaviors, looking for rules when there are none, and they are overly concerned about making mistakes.” This study was the first to show altered pathways in the brain even in those recovered from anorexia, suggesting that inherent differences in the brain’s architecture and signaling systems help trigger the illness in the first place.

Food Is Medicine

Some of the best news to come out of research on anorexia is a new therapy aimed at kids and teens. Family-based treatment (FBT), also known as the Maudsley approach, was developed at the Maudsley Hospital in London by Ivan Eisler and Christopher Dare, family therapists who watched nurses on the inpatient eating-disorders unit get patients to eat by sitting with them, talking to them, rubbing their backs, and supporting them. Eisler and Dare wondered how that kind of effective encouragement could be used outside the hospital.

Their observations led them to develop family-based treatment, or FBT, a three-phase treatment for teens and young adults that sidesteps the debate on etiology and focuses instead on recovery. “FBT is agnostic on cause,” says Dr. Le Grange. During phase one, families (usually parents) take charge of a child’s eating, with a goal of fully restoring weight (rather than get to the “90 percent of ideal body weight” many programs use as a benchmark). In phase two, families gradually transfer responsibility for eating back to the teen. Phase three addresses other problems or issues related to normal adolescent development, if there are any.

FBT is a pragmatic approach that recognizes that while people with anorexia are in the throes of acute malnourishment, they can’t choose to eat. And that represents one of the biggest shifts in thinking about eating disorders. The DSM-IV, the most recent “bible” of psychiatric treatment, lists as the first symptom of anorexia “a refusal to maintain body weight at or above a minimally normal weight for age and height.” That notion of refusal is key to how anorexia has been seen, and treated, in the past: as a refusal to eat or gain weight. An acting out. A choice. Which makes sense within the psychodynamic model of cause.

But it doesn’t jibe with the research, which suggests that anorexia is more of an inability to eat than a refusal. Forty-five years ago, Aryeh Routtenberg, then (and still) a professor of psychology at Northwestern University, discovered that when he gave rats only brief daily access to food but let them run as much as they wanted on wheels, they would gradually eat less and less, and run more and more. In fact, they would run without eating until they died, a paradigm Routtenberg called activity-based anorexia (ABA). Rats with ABA seemed to be in the grip of a profound physiological imbalance, one that overrode the normal biological imperatives of hunger and self-preservation. ABA in rats suggests that however it starts, once the cycle of restricting and/or compulsive exercising passes a certain threshold, it takes on a life of its own. Self-starvation is no longer (if it ever was) a choice, but a compulsion to the death.

That’s part of the thinking in FBT. Food is the best medicine for people with anorexia, but they can’t choose to eat. They need someone else to make that choice for them. Therapists don’t sit at the table with patients, but parents do. And parents love and know their children. Like the nurses at the Maudsley Hospital, they find ways to get kids to eat. In a sense, what parents do is outshout the anorexia “voice” many sufferers report hearing, a voice in their heads that tells them not to eat and berates them when they do. Parents take the responsibility for making the choice to eat away from the sufferer, who may insist she’s choosing not to eat but who, underneath the illness, is terrified and hungry.

The best aspect of FBT is that it works. Not for everyone, but for the majority of kids and teens. Several randomized controlled studies of FBT and “treatment as usual” (talk therapy without pressure to eat) show recovery rates of 80 to 90 percent with FBT—a huge improvement over previous recovery rates. A study at the University of Chicago is looking at adapting the treatment for young adults; early results are promising.

The most challenging aspect of FBT is that it’s hard to find. Relatively few therapists in the U.S. are trained in the approach. When our daughter got sick, my husband and I couldn’t find a local FBT therapist. So we cobbled together a team that included our pediatrician, a therapist, and lots of friends who supported our family through the grueling work of re-feeding our daughter. Today she’s a healthy college student with friends, a boyfriend, career goals, and a good relationship with us.

A few years ago, Dr. Le Grange and his research partner, Dr. James Lock of Stanford, created a training institute that certifies a handful of FBT therapists each year. (For a list of FBT providers, visit the Maudsley Parents website.) It’s a start. But therapists are notoriously slow to adopt new treatments, and FBT is no exception. Some therapists find FBT controversial because it upends the conventional view of eating disorders and treatments. Some cling to the psychodynamic view of eating disorders despite the lack of evidence. Still, many in the field have at least heard of FBT and Kaye’s neurobiological findings, even if they don’t believe in them yet.

Change comes slowly. But it comes.

* * *

Harriet Brown teaches magazine journalism at the S.I. Newhouse School of Public Communications in Syracuse, New York. Her latest book is Brave Girl Eating: A Family’s Struggle with Anorexia (William Morrow, 2010).


Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more remarkable is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.


Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.
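
To spell out that one-atom difference explicitly, here is a minimal Python sketch comparing what hangs off each numbered carbon of the two sugars; the substituent lists are simplified to just the groups discussed above.

# Simplified map of the group attached at each ring carbon; the ring bonds
# themselves and most hydrogens are omitted, only the groups discussed are shown.
ribose      = {1: "OH", 2: "OH", 3: "OH", 4: "H", 5: "CH2OH"}
deoxyribose = {1: "OH", 2: "H",  3: "OH", 4: "H", 5: "CH2OH"}

for carbon in sorted(ribose):
    if ribose[carbon] != deoxyribose[carbon]:
        print(f"Carbon {carbon}: ribose has -{ribose[carbon]}, deoxyribose has -{deoxyribose[carbon]}")
# Output: Carbon 2: ribose has -OH, deoxyribose has -H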

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
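
To put rough numbers on that idea, here is a quick back-of-the-envelope Python sketch of how fast the possibilities multiply when you chain a small alphabet of building blocks (the chain length of 10 is arbitrary, chosen only for illustration).

# Number of distinct chains = (number of building blocks) ** (chain length)
length = 10  # an arbitrary, modest chain length for illustration
for name, blocks in [("binary digits", 2), ("DNA nucleotides", 4),
                     ("decimal digits", 10), ("amino acids", 20)]:
    print(f"{name}: {blocks}**{length} = {blocks ** length:,} possible chains")

# binary digits: 2**10 = 1,024 possible chains
# DNA nucleotides: 4**10 = 1,048,576 possible chains
# decimal digits: 10**10 = 10,000,000,000 possible chains
# amino acids: 20**10 = 10,240,000,000,000 possible chains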

Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. In us, we build this energy reserve primarily in the liver and access it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every available bond occupied by a hydrogen single-bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconut and palm oils also are known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is–one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with each other, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, in part helps us out because it removes the bad cholesterol, low-density lipoprotein or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.

Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while a longer chain of many amino acids forms a polypeptide. When many amino acids strung together interact with each other to form a properly folded molecule, we call that molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.

Once those interactions have occurred, the protein can fold into its final, or tertiary structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the amino acid chain’s secondary interactions must usually be ongoing, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.

In many instances, however, a complete protein consists of more than one amino acid chain. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains, each properly folded into its tertiary structure and all interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is similarly simple at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a DNA sequence as a template to build either a new DNA sequence or an RNA copy of the DNA.
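To make the pairing rule concrete, here is a minimal sketch in Python (a toy illustration, not anything a cell literally runs; the example sequence is made up):

```python
# Base-pairing rules described above: A-T and C-G in DNA; A-U and C-G in RNA.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}

def paired_strand(template, as_rna=False):
    """Return the strand that would pair with the given DNA template."""
    pairs = RNA_PAIR if as_rna else DNA_PAIR
    return "".join(pairs[base] for base in template)

template = "ATGCCGTA"                        # hypothetical sequence
print(paired_strand(template))               # TACGGCAT (new DNA strand)
print(paired_strand(template, as_rna=True))  # UACGGCAU (RNA copy)
```

The same lookup, run one way or the other, mirrors the two jobs described next: copying DNA into DNA and copying DNA into RNA.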

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.


 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Double Xplainer: Once in a Blue Moon

Full Moon, from Flickr user Proggie, under a Creative Commons license.
Tonight—August 31, 2012—is the second full Moon of August. The last time two full Moons occurred in the same month was in 2010, and the next will be in 2015, so while these events are uncommon, they aren’t terribly rare either. In fact, you’ve probably heard the second full Moon in a month given a name: “blue moon”. (The Moon will not appear to be a blue color, though, cool as that would be. More on that in a bit.) What you may not know is that this term dates back only to 1946 and is actually a mistake.

According to Sky and Telescope, a premier astronomy magazine (check your local library!), the writer James Hugh Pruett made an incorrect assumption about the use of the term “blue moon” in his March 1946 article. His source was the Maine Farmers’ Almanac, but he misinterpreted it. The almanac used “blue moon” to refer to the rare occasion when four full Moons happen in one season, when there are usually only three. By the almanac’s standards, tonight’s full Moon is not a blue moon (though there will be one on August 21, 2013).

However, even that definition of “blue moon” apparently only dates to the early 19th century. In its colloquial, non-astronomical sense, a “blue moon” is something that rarely or never happens: like the Moon appearing blue. The Moon is white and gray when it’s high in the sky, and can appear very red, orange, or yellow near the horizon for the same reason the Sun does. As far as I can tell, the only time the Moon appears blue is when there’s a lot of volcanic ash in the air, also a rare event (thankfully) for most of the world. The popular song “Blue Moon” (written by everyone’s favorite gay misanthrope, Lorenz Hart) uses “blue” to mean sad, rather than rare.

I’m perfectly happy to keep the common mistaken usage of “blue moon” around, though, since it’s not really a big deal to me. Call tonight’s full Moon a blue moon, and I’ll back you up. However, because it’s me, let’s talk about the Moon and the Sun and why this stuff is kind of arbitrary.

The Moon and the Sun Don’t Get Along

The calendar used in much of the world is the Gregorian calendar, named for Pope Gregory XIII, who instituted it in 1582. The Gregorian calendar, in turn, was based on the older Roman calendar (known as the Julian calendar, for famous pinup girl Julie Callender, er, Julius Caesar). The Romans’ calendar was based on the Sun: a year is the length of time for the Sun to return to the same spot in the sky. This length of time is approximately 365.25 days, which is why there’s a leap year every four years. (Experts know I’m simplifying; if you want more information, see this post at Galileo’s Pendulum.)
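For the curious, the actual Gregorian fix is a bit more elaborate than a leap year every four years: century years are skipped unless they are divisible by 400, which nudges the average calendar year to 365.2425 days. A quick sketch of the rule (my own illustration, just to make the arithmetic visible):

```python
def is_leap_gregorian(year):
    """Gregorian leap-year rule: every 4th year, except century years
    that are not divisible by 400 (1900 was not a leap year; 2000 was)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 97 leap days per 400 years gives an average year of 365 + 97/400 days.
leap_days = sum(is_leap_gregorian(y) for y in range(1, 401))
print(leap_days, 365 + leap_days / 400)   # 97 365.2425
```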

A problem arises when you try to break the year into smaller pieces. Traditionally, this has been done through reference to the Moon’s phases. The time to cycle through all the phases of the Moon is called a lunation, which is about 29 days, 12 hours, 44 minutes, and 3 seconds long. You don’t need to pull out a calculator to realize that a lunation doesn’t divide into a year evenly, but it’s still a reasonable way to mark the passage of time within a year, so it’s the foundation of the month (or moonth).
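Still, a quick back-of-the-envelope check of that mismatch, using the rough numbers quoted above, is worth doing:

```python
# How many lunations fit into a solar year? Approximate values only.
lunation_days = 29 + 12/24 + 44/(24*60) + 3/(24*3600)   # about 29.53 days
solar_year_days = 365.2425                               # average Gregorian year

print(solar_year_days / lunation_days)   # ~12.37 lunations per solar year
print(12 * lunation_days)                # ~354.4 days in a purely lunar "year"
```

That leftover fraction of a lunation is exactly the slack that leap months, drifting holidays, and blue moons all come from.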

Many calendars—the traditional Chinese calendar, the Jewish calendar, and others—define the month based on a lunation but don’t fix the number of months in a year. That means some years have 12 months, and others have 13: a leap month. It also means that holidays in these calendars move relative to the Gregorian calendar, such that Yom Kippur or the Chinese New Year don’t fall on the same date in 2012 that they did in 2011. (The Christian religious calendar combines aspects of the Jewish and the Gregorian calendars: Christmas is always December 25, but Easter and associated holidays are tied to Passover—which is coupled to the first full Moon after the spring equinox, and so can occur on a variety of dates in March and April.)

Another resolution to the problem of lunations vs. Sun is to ignore the Sun; this is what the Islamic calendar does. Months are defined by lunations, and the year is precisely 12 months, meaning the year in this calendar is 354 or 355 days long. This is why the holy month of Ramadan moves throughout the Gregorian year, happening sometimes in summer, and sometimes in winter.

The Gregorian calendar takes the opposite approach from the Islamic calendar: the year is fixed to the Sun, while the months are not based on a lunation at all. Months may be 30 days long (roughly one lunation), 31 days, or 28 days; the latter two options make no astronomical sense at all. Solar-only calendars have some advantages: since seasons are defined relative to the Sun, the equinoxes and solstices happen on roughly the same date every year, which doesn’t happen in lunation-based calendars. It’s all a matter of taste, culture, and convenience, however, since the cycles of the Sun and the Moon don’t cooperate with the length of the day on Earth, or with each other.

Blue moons in the common post-1946 usage never happen in lunation-based calendar systems because by definition each phase of the Moon only occurs once in a month. On the other hand, the version from the Maine Farmers’ Almanac is relevant to any calendar system, because it’s defined by the seasons. As I wrote in my earlier DXS post, seasons are defined by the orbit of Earth around the Sun, and the relative orientation of Earth’s axis. Thus, summer is the same number of days whatever calendar system you use, even though it may not always be the same number of months. In a typical season, there will be three full Moons, but because of the mismatch between lunations and the time between equinoxes and solstices, some rare seasons may have four full Moons.
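The same arithmetic shows why the almanac’s version is rare but inevitable: a season is a bit more than three lunations long, so the leftover fraction occasionally squeezes a fourth full Moon into one season. A rough sketch (seasons aren’t all exactly equal in length, but this is close enough for the point):

```python
# Full Moons per season, on average, using rough figures.
lunation_days = 29.5306
season_days = 365.2425 / 4    # about 91.3 days

print(season_days / lunation_days)   # ~3.09 full Moons per season on average
# The extra 0.09 accumulates until, every few years, one season fits four.
```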

The Moon and Sun have provided patterns for human life and culture, metaphors for poetry and drama, and of course lots of superstition and pseudoscience. However, one thing most people can agree upon: the full Moon, blue or not, is a thing of beauty. If you can, go out tonight and have a look at it—and give it a wink in honor of the first human to set foot on it, Neil Armstrong.

Why blueberries won’t turn you blue and other blueberry facts

Blueberries.


by Adrienne Roehrich, Chemistry Editor

Blueberries in North America are the fruit of several shrubs in the genus Vaccinium L. They grow in all provinces of Canada and in all but two of the United States (Nebraska and North Dakota). In North America, one can find 43 species of blueberries, depending on the region. Blueberries are found and produced in all hemispheres of the world, though the species vary by region.

Taxonomy:
Kingdom: Plantae (Plants)
Subkingdom: Tracheobionta (Vascular plants)
Subdivision: Spermatophyta (Seed plants)
Division: Magnoliophyta (Flowering plants)
Class: Magnoliopsida (Dicotyledons)
Subclass: Dilleniidae
Order: Ericales
Family: Ericaceae
Genus: Vaccinium

There are 43 species and 46 accepted taxa overall. Some of the species include fruits we do not necessarily recognize as blueberry, including farkleberry, bilberry, ohelo, cranberry, huckleberry, whortleberry, deer berry, and lingonberry. (Source)

Blueberries are a very popular fruit in the U.S. and are consumed in fresh, frozen, and canned forms. While blueberries are a great fruit to eat to meet your suggested fruit intake, they are also among the foods purported to have properties they simply do not have. This undeserved reputation comes from their high levels of antioxidants, which lead those predisposed to looking for “super foods” to classify blueberries in the antioxidant super food category. While eating more healthy foods is always a good idea, no food has curative effects all on its own.

Another aspect of blueberry nutrition is its sugar content. One cup (148 g) of blueberries contains about 15 g of sugar and 4 g of fiber, a single gram of protein, and half a gram of fat. If you are counting carbs, this cup has 21 g of them. That one cup of blueberries averages about 85 calories, which is approximately the same as a medium apple or orange. While almost all the vitamins and minerals nutrition gurus like to report on are present in some amount, on a 2,000-calorie diet, that one cup of blueberries provides 24% of the recommended daily value of Vitamin C, 36% of Vitamin K, and 25% of manganese. The remaining values range from 0-4%. (Values obtained from Nutrition.com and verified through multiple sources.)
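As a rough sanity check on those numbers, the calorie count follows from the macronutrients using the usual approximate conversion factors (4 kcal per gram of digestible carbohydrate or protein, about 2 kcal per gram of fiber, 9 kcal per gram of fat). This is only an estimate, not a nutrition label:

```python
# One cup (148 g) of blueberries, using the figures quoted above.
carbs_g, fiber_g, protein_g, fat_g = 21, 4, 1, 0.5

kcal = (carbs_g - fiber_g) * 4 + fiber_g * 2 + protein_g * 4 + fat_g * 9
print(kcal)   # 84.5, right around the ~85 calories quoted above
```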

The Wikipedia entry is quite good and well researched (as of August 18, 2012). 

The photo above shows all of the life stages of a blueberry. Berries go from the little red nub at the end of the branch to round, juicy blueberries: after fertilization of the ovary, the berry swells rapidly for about a month, and then its growth ceases. The green berry then develops without changing size. The chemicals responsible for the blue color, anthocyanins, begin to turn the berry from green to blue as it develops over about 6 days. The volume of the berry increases during this color-change phase.

Will blueberries turn you blue? In short, no. You can achieve blue skin through the ill-advised practice of drinking silver, or you can achieve orangish-yellow skin by eating a large number of carrots. In the case of carrots, the pigments responsible for the color are fat soluble and accumulate in the fat just under the skin, giving the skin that tint. Anthocyanin, the primary chemical causing the blue color in blueberries, is not fat soluble and will not reside in the fat under your skin.

Anthocyanins are a class of over 30 compounds. Their chemical structure is generally as shown below. They are polyphenolic, which refers to their three ring structures. The “R” indicates different functional groups that change depending on which anthocyanin the structure represents.


Interestingly, anthocyanins are also pH indicators because their color ranges from yellow to red to blue depending on the local pH. The blue color indicates a neutral pH. The Wikipedia page on anthocyanins is also informative (as of August 18, 2012).

As mentioned before, blueberries are a popular fruit. Recipes abound, but here is one from my own Recipe Codex for Surprise Muffins with blueberries:

Ingredients
  • 6 Tbsp. butter
  • 3/4 cup sugar
  • 2 eggs
  • 1/2 cup milk
  • 1/2 – 1 pint blueberries, fresh or frozen (defrosted)
  • Food coloring, optional
  • 2 cups all-purpose flour
  • 1/4 tsp. salt
  • 1 Tbsp. baking powder
  • Your favorite mini-treat (Hershey’s Kisses, Hugs, Reese’s Mini Cups, strawberry jam, etc.)
Directions
  1. Preheat the oven to 350º. In a large bowl, cream the butter and sugar. You can use a wooden spoon, a potato masher or handheld electric mixer. Mix in the eggs, one at a time, and add the milk.
  2. Rinse the blueberries and remove any stems. Mash the berries with a potato masher or puree in a blender. Then stir the berries into the butter and milk mixture. TIP: For muffins with a more blue color, add a few drops of blue food coloring.
  3. In a separate bowl, sift the flour, salt and baking powder. Stir well. Add the flour mixture to the berry mixture. Use a wooden spoon to stir until all the white disappears.
  4. Line the muffin tin with paper liners. Drop the batter from a tablespoon to fill the cups halfway.
  5. Add a surprise: an unwrapped mini treat or 1/2 teaspoon of jam. Then spoon more batter to fill almost to the top.
  6. Bake until the muffins begin to brown and a toothpick inserted near the center (but not in the mini-treat) comes out clean, about 20-25 minutes.
  7. Remove the muffins from the tin and cool.
Or perhaps you are in less of a cooking scientist mood and more in a home lab mood. Try this at-home lab with blueberries about dyes. Adapted from the Journal of Chemical Education.

Items You Need
  • 4 microwavable or stove-top-safe glasses, pots, or containers at least 1/2 cup in volume
  • tablespoons or 1/4 cup measuring cup
  • water
  • spatula
  • alum (available in the grocery store spice aisle)
  • cream of tartar (available in the grocery store spice aisle)
  • hot pads and tongs
  • at least four small (1-2 in.) squares of white cotton cloth
  • yellow onion skins
  • blueberries
  • spoon
  • paper towels
  • vinegar
  • baking soda
  • a dropper
  • notebook for experimental observations
Procedure
In each step, you will want to record your observations, paying special attention to colors.
  1. Pour 4 tablespoons (1/4 cup) of water into container 1. Add a pea-sized scoop of alum and about half that amount of cream of tartar and stir. Bring the solution to a boil on the stove top or by microwaving for about 60 seconds. (Your microwave may vary.) Add two small squares of white cotton cloth and boil for two minutes. Set the container aside. The squares will be used in steps 4 and 6.
  2. Tear the outer, papery skin from a yellow onion into pieces no more than 1 inch square. Place enough pieces in a second container to cover its bottom with  2 or 3 layers of onion skin. Add about 4 tablespoons of water to the container. Bring the solution to a boil on the stove top, continuing to boil for 5  minutes.
  3. Wet a new square of cloth with water. Place it in container 2 so it is completely submerged and boil for 1 minute. Using tongs, remove the cloth and rinse it with water. Place the cloth square in the appropriate area on a labeled paper towel.
  4. Use tongs to remove one of the cloth squares from container 1. Repeat step 3 using this square. Compare it to the dyed cloth square from step 3.
  5. Pour 4 tablespoons of water in a third container. Add 4-5 blueberries to the container and mash them with a spoon. Bring the solution to a boil on the stove, and continue to boil for 5 minutes.
  6. Repeat steps 3 and 4 substituting the blueberry mixture in container 3 for the onion skin mixture in container 2.
  7. Mix a small scoop of baking soda with a teaspoon of water in a clean container. With a dropper, place 1-2 drops of the baking soda solution in one corner of each cloth square. What happens? Rinse the dropper thoroughly, then place 1-2 drops of vinegar on the opposite corner of each square. What happens? Rinse the fabric squares under cool running water. Is there a change? Allow the squares to dry overnight. Is there any change after the cloth dries?
Optional: Try variations in the procedure such as changing the amount of dye source, the length of time the cloth spends in the dye solution, and the temperature of the dye solution.

Questions to consider
The solution in step 1 is called a mordant. Based on your observations, what is the purpose of a mordant?
Is the dye produced by blueberries really blue? Why might some people not want to wear clothes dyed with blueberries?

———————-
All in all, enjoy your blueberries. As a shrub, the blueberry is quite pretty. As a fruit, it is quite yummy. And as a tool in an experiment, it is quite fun.

These views are the opinion of the author and do not necessarily reflect or disagree with those of the DXS editorial team.

Book Review: Science Myths Unmasked: Exposing the misconceptions and counterfeits forged by bad science books


 

By DXS Biology Editor Jeanne Garbarino

Do you remember that old candle experiment involving a lit candle in a jar? You know, the one where you place a lit candle in a bowl of water, then place a jar over the candle, and rather quickly, the candle extinguishes? If you were like me, you probably learned that the candle goes out because all of the oxygen gets used up (oxygen is a requirement for combustion).  However, according to David Isaac Rudel in his multi-volume series Science Myths Unmasked, this is one of the many science demonstrations that are wholly misinterpreted.

Unfortunately, the science textbooks used by thousands of schools across the US are chock-full of what Rudel calls “pseudo-explanations” for many complicated scientific phenomena. Instead of presenting clear explanations, including the establishment of a basic scientific foundation, many science textbooks present certain concepts using shortcuts, with the assumption that these so-called shortcuts make it easier for kids to understand science. 

Rudel argues that these shortcuts, which are often associated with an “abuse of [scientific] language,” only confuse students. In fact, included on the back cover of Science Myths Unmasked, Volume 2: Physical Sciences is a quote from Richard Feynman regarding science textbooks: “They said things that were useless, mixed-up, ambiguous, confusing, and partially incorrect. How anybody can learn science from these books, I do not know, because it’s not science.”

My husband, a public high school chemistry and biology teacher, is wholeheartedly aligned with this particular opinion of Feynman and Rudel and for many years, has not used a textbook to teach science. When I asked why, he simply stated, “They just confuse the kids.” 

As an example of what is wrong with science textbooks, let’s get back to the candle-in-a-jar experiment. In Science Myths Unmasked Volume 2: Physical Science, this very common scientific demonstration is thoroughly dissected, explaining why “the candle goes out when the oxygen content of the air is no longer high enough to support combustion” is an incorrect conclusion found in many textbooks, especially since it overlooks how the products of combustion affect the candle flame. After elaborating on the precise conditions point by point, and providing an outline for easy demonstrations to “expose the myth,” Rudel states the following:

Candles in closed containers do not go out because they use up all the oxygen.  Rather, the hot carbon dioxide (and to a lesser extent water vapor) given off in combustion accumulates at the top, pushing down other gases (most importantly, oxygen), and eventually stifles the flame. 

If the jar’s rim is submerged in water, the liquid rises not because water is replacing the oxygen used up in combustion.  Rather, the air inside the jar cools as the flame dies down and hot gases offload heat to the glass container.  As the air cools, it applies less pressure to the water than it did when the jar was first put over the candle.  The water rises as a result of the decreasing pressure from the air against it.           
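To put a rough number on the pressure part of that argument (my own back-of-the-envelope illustration, not Rudel’s, and it ignores the change in gas composition he also discusses): treat the trapped air as an ideal gas, and note that the water rises until the pressure inside roughly matches the pressure outside, so the gas volume shrinks by about the factor T2/T1.

```python
# Rough estimate of how much of the jar's air space the water reclaims
# as the trapped air cools. The temperatures are assumed, for illustration.
T1 = 320.0   # K, warm air trapped when the jar first covers the flame
T2 = 295.0   # K, after the flame dies and the air sheds heat to the glass

shrinkage = 1 - T2 / T1
print(f"{shrinkage:.1%}")   # ~7.8% of the air volume, i.e., a visible rise
```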

In the Science Myths Unmasked series, a great number of scientific factoids and processes that are often misrepresented in the classroom are correctly explained, and in great detail. In addition to the candle experiment described above, Rudel tackles simple machines, circuits, phase change, and waves, just to name a few. However, this book is not for those without at least some background in science, as it does get technical. I would, though, recommend that these books find a way onto the shelves of science educators, as it seems they would benefit the most from the lessons and demonstrations covered. It is also good for people who, like me, have a scientific background and wish to properly explain scientific concepts to their kids, as I am sure those questions are bound to come up.

For more on the Science Myths Unmasked series, go here.    

Pregnancy 101: On the cervical mucus plug and why I’ve never been more happy to hold something so disgusting in my hand

Like the eye of Sauron drawn to the One Ring, one cannot resist looking at the mucus plug.
June 3rd, 2007 fell on a Sunday. I awoke that morning feeling disappointed that I was still pregnant. My due date had come and gone and, honestly, I was sick of being a human incubator. I had enough of the heartburn, involuntary peeing, and the overall beached-whale feeling. The baby in utero was resting comfortably on my sciatic nerve, and I could barely walk. And perhaps even more important was the fact that I just wanted to finally meet the child I had grown from just a few cells!

Feeling like it would never come to be, I slowly waddled into the bathroom and somehow negotiated the tall edge of the bathtub in order to take a shower. As I stood allowing the hot water to pour down my back, I looked down at the giant watermelon growing from my abdomen and literally began to beg. “Little baby, please please PLEASE make your way out today!” Right at that moment, and I kid you not, my cervix released my mucus plug and deposited it into the palm of my hand.

Video of a mucus plug being poked and prodded with tweezers. Watch at your own risk.
Suddenly, I saw the light at the end of the pregnancy tunnel. I excitedly called for my husband. “Jim! You have to come see this!!” He came running in as he was already on edge, given the circumstances. “My mucus plug came out! Do you want to see it?” As much as he tried to resist looking at something that was potentially grotesque (and it was), instinct overrode logic. His actions did not match the words coming out of his mouth, which were along the lines of “hell no!” and, like Sauron responding to the wearing of the ring, his eyes were slowly drawn down to what was gently wobbling in the palm of my hand.   

The human eye is primed to settle on things that are aesthetically pleasing, and the mere mention of the word “mucus” can elicit a queasy feeling in one’s gut. However, mucus plays a significant biological role in our bodies. In general, mucus serves as a physical barrier against microbial invaders (bacteria, fungi, viruses) and small particulate matter (dust, pollen, allergens of all kinds). Protective mucous membranes line a multitude of surfaces in our bodies, including the digestive tract, the respiratory pathway, and, of course, the female reproductive cavity.

But when it comes to matters of ladybusiness, the function of mucus goes beyond that of a microbial defense system. Produced by specialized cells lining the cervix, which is the neck of the uterus and where the uterus and vagina meet, mucus also plays a role in either facilitating or preventing sperm from traveling beyond the vagina and into the upper reproductive tract.

For instance, cervical mucus becomes thinner around the time of ovulation, providing a more suitable conduit for sperm movement and swimming (presumably toward the egg). Furthermore, some components from this so-called “fertile” cervical mucus actually help prolong the life of sperm cells. Conversely, after the ovulation phase, normal hormonal fluctuations cause cervical mucus to become thicker and more gel-like, acting as a barrier to sperm. This response helps to prepare the uterus for pregnancy if  fertilization happens.

During pregnancy, a sustained elevation of a hormone called progesterone causes the mucus-secreting cells in the cervix to produce a much more viscous and elastic mucus, known as the cervical mucus plug. In non-scientific terms, the mucus plug is like the cork that keeps all of the bubbly baby goodness safe from harmful bacteria. It is quite large, often weighing in around 10 g (0.35 oz), and consists mostly of water (>90%) that contains several hundred types of proteins. These proteins do many jobs, including acting as immunological gatekeepers, maintaining structure, regulating fluid balance, and even handling cholesterol metabolism (cholesterol is an ever-important component of healthy fetal development).
As a woman nears the end of a pregnancy, the cervix releases the mucus plug as it thins out in preparation for birth. Often, the thinning of the cervix can release some blood into the mucus plug, which is why some describe the loss of the mucus plug as a “bloody show.” However, losing the mucus plug is not necessarily an indication that labor is starting. Activities like sex or an internal cervical examination can cause the mucus plug to dislodge. It can fall out hours, days, or even weeks before labor begins. In my case, the loss of my mucus plug was associated with the onset of labor, which is why I have never been so happy to hold something so disgusting in my hand. 


Last week, I told the story of my two births, including the loss of my mucus plug, at an event called The Story Collider. I described the mucus plug as “a big hot gelatinous mess.” I pushed it a bit further by providing the following graphic imagery: “Picture a Jell-O jiggler, but instead of brightly colored sugar, it’s made up of bloody snot.” I was pleased with the audience response, which mostly consisted of animated face smooshing accompanied by grossed-out groans and sighs. For the rest of the evening, I heard people call to me from all over the bar by screaming “MUCUS PLUG!!!” Given the importance of the mucus plug during pregnancy (and mucus in general), combined with its comedic potential, it’s no wonder that it was a hit. Go mucus!


Jeanne Garbarino, Double X Science biology editor

References

Kamran Moghissi, Otto W. Neuhaus, and Charles S. Stevenson. Composition and properties of human cervical mucus. I. Electrophoretic separation and identification of proteins. J Clin Invest. 1960 September; 39(9): 1358–1363.

Lee DC, Hassan SS, Romero R, Tarca AL, Bhatti G, Gervasi MT, Caruso JA, Stemmer PM, Kim CJ, Hansen LK, Becher N, Uldbjerg N. Protein profiling underscores immunological functions of uterine cervical mucus plug in human pregnancy. J Proteomics. 2011 May 16;74(6):817-28. Epub 2011 Mar 23.

Ilene K. Gipson. Mucins of the human endocervix. Frontiers in Bioscience. 2001 October; 6: d1245–1255.

Merete Hein MD, Erika V. Valore MS, Rikke Bek Helmig MD, PhD, Niels Uldbjerg MD, PhD, Tomas Ganz PhD, MD. Antimicrobial factors in the cervical mucus plug. American Journal of Obstetrics and Gynecology 2002 July Volume 187, Issue 1, 137-144

Naja Becher, Kristina Adams Waldorf, Merete Hein, and Niels Uldbjerg. The cervical mucus plug: Structured review of the literature. Acta Obstetricia et Gynecologica. 2009; 88: 502–513.