Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more implausible is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.

                                                  

Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look at a nutrition label, you’ll see the carbohydrates broken down into “sugars” and “fiber.” The sugars are the carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The fiber is carbohydrate that gives structure to a plant. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought “deoxyribose,” you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of carbon 2 of each sugar? You’ll see that carbon 2 of ribose has an –OH, rather than an H. Deoxyribose gets its name because the oxygen on the second carbon of ribose has been removed, leaving a “de-oxy” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
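To put a rough number on that combinatorial power, here’s a back-of-the-envelope calculation (the chain length is an arbitrary choice, just for illustration):

```latex
% Distinct chains of length n built from k monomer types: k^n possibilities.
% For proteins, k = 20 amino acids; even a modest chain of 100 residues gives
\[ 20^{100} \approx 1.3 \times 10^{130} \text{ possible sequences.} \]
```

Only a vanishingly small fraction of those possible sequences ever gets built, but the arithmetic shows why a handful of building blocks places no real limit on diversity.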

Our bodies can then rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule, one that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
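Written out as a chemical equation, using as an example two glucose monomers joining to form the disaccharide maltose (one specific case among many), it looks like this:

```latex
% Dehydration synthesis: two glucoses bond into maltose, releasing one water.
\[ C_6H_{12}O_6 + C_6H_{12}O_6 \longrightarrow C_{12}H_{22}O_{11} + H_2O \]
% Hydrolysis is the same reaction run in reverse: adding a water molecule
% back in breaks the covalent bond between the two sugars.
```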

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and access it when our blood glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose as a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group converts the cellulose-like polymer into a different structural molecule, the one that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every available bond occupied by a hydrogen single bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is largely saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconut and palm oils also are known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is–one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with each other, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be partially re-saturated, or hydrogenated, to behave more like saturated fats and have a longer shelf life. This process of partial hydrogenation–adding hydrogens back in–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists “partially hydrogenated” oils among the ingredients, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–mostly fat, much of it saturated–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoonful of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, helps us out in part because it removes the bad cholesterol, low-density lipoprotein, or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids, or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.

Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while a longer chain of many amino acids is a polypeptide. When that chain of amino acids interacts with itself and folds into its proper shape, we call the resulting molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.

Once those interactions have occurred, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the secondary structures along the chain must usually already have formed, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.

In many instances, however, a complete protein consists of more than one amino acid chain, and the complete protein has two or more interacting strings of amino acids. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
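If it helps to see those pairing rules in action, here’s a minimal, illustrative sketch in Python. It isn’t meant to represent what a cell literally does (it ignores strand directionality and all the enzymes involved), and the template sequence is made up for the example:

```python
# Illustrative sketch of template-based base pairing (simplified; ignores the
# 5'-to-3' directionality of real replication and transcription).
DNA_PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}  # copying DNA into DNA
RNA_PAIRS = {"A": "U", "T": "A", "C": "G", "G": "C"}  # copying DNA into RNA

def complement(template: str, pairs: dict) -> str:
    """Pair each base on the template strand with its predictable partner."""
    return "".join(pairs[base] for base in template)

template = "ATGCCGTA"                   # a made-up DNA template strand
print(complement(template, DNA_PAIRS))  # TACGGCAT -> a new DNA strand
print(complement(template, RNA_PAIRS))  # UACGGCAU -> an RNA copy
```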

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.


 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Biology Xplainer: Evolution and how it happens

Evolution: a population changes over time
First of all, in the context of science, you should never speak of evolution as a “theory.” There is no theory about whether or not evolution happens. It is a fact.

Scientists have, however, developed tested theories about how evolution happens. Although several proposed and tested processes or mechanisms exist, the most prominent and most studied, talked about, and debated, is Charles Darwin’s idea that the choices of nature guide these changes. The fame and importance of his idea, natural selection, has eclipsed the very real existence of other ways that populations can change over time.

Evolution in the biological sense does not occur in individuals, and the kind of evolution we’re talking about here isn’t about life’s origins. Evolution must happen at least at the population level. In other words, it takes place in a group of existing organisms, members of the same species, often in a defined geographical area.

We never speak of individuals evolving in the biological sense. The population, a group of individuals of the same species, is the smallest unit of life that evolves.

To get to the bottom of what happens when a population changes over time, we must examine what’s happening to the gene combinations of the individuals in that population. The most precise way to talk about evolution in the biological sense is to define it as “a change in the allele frequency of a population over time.” A gene, which contains the code for a protein, can occur in different forms, or alleles. These different versions can mean that the trait associated with that protein can differ among individuals. Thanks to mutations, a gene for a trait can exist in a population in these different forms. It’s like having slightly different recipes for making the same cake, each producing a different version of the cake, except in this case, the “cake” is a protein.
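A quick worked example, with made-up numbers, shows what an allele frequency is and what it would mean for it to change:

```latex
% A population of 50 diploid individuals carries 100 copies of a given gene.
% If 30 copies are allele A and 70 copies are allele a, then
\[ p_A = \frac{30}{100} = 0.30, \qquad q_a = \frac{70}{100} = 0.70 \]
% Evolution, in this precise sense, is any shift in these frequencies
% from one generation to the next.
```
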
Natural selection: One way evolution happens

Charles Darwin, a smart, thoughtful, observant man. Via Wikimedia.
Charles Darwin, who didn’t know anything about alleles or even genes (so now you know more than he did on that score), understood from his work and observations that nature makes certain choices, and that often, what nature chooses in specific individuals turns up again in the individuals’ offspring. He realized that these characteristics that nature was choosing must pass to some offspring. This notion of heredity–that a feature encoded in the genes can be transmitted to your children–is inherent now in the theory of natural selection and a natural one for most people to accept. In science, an observable or measurable feature or characteristic is called a phenotype, and the genes that are the code for it are called its genotype. The color of my eyes (brown) is a phenotype, and the alleles of the eye color genes I have are the genotype.

What is nature selecting any individual in a population to do? In the theory of natural selection, nature chooses individuals that fit best into the current environment to pass along their “good-fit” genes, either through reproduction or indirectly through supporting the reproducer. Nature chooses organisms to survive and pass along those good-fit genes, so they have greater fitness.

Fitness is an evolutionary concept related to an organism’s reproductive success, either directly (as a parent) or indirectly (say, as an aunt or cousin). It is measured technically based on the proportion of an individual’s alleles that are represented in the next generation. When we talk about “fitness” and “the fittest,” remember that fittest does not mean strong. It relates more to a literal fit, like a square peg in a square hole, or a red dot against a red background. It doesn’t matter if the peg or dot is strong, just whether or not it fits its environment.

One final consideration before we move onto a synthesis of these ideas about differences, heredity, and reproduction: What would happen if the population were uniformly the same genetically for a trait? Well, when the environment changed, nature would have no choice to make. Without a choice, natural selection cannot happen–there is nothing to select. And the choice has to exist already; it does not typically happen in response to a need that the environment dictates. Usually, the ultimate origin for genetic variation–which underlies this choice–is mutation, or a change in a DNA coding sequence, the instructions for building a protein.

Don’t make the mistake of saying that an organism adapts by mutating in response to the environment. The mutations (the variation) must already be present for nature to make a choice based on the existing environment.

The Modern Synthesis

When Darwin presented his ideas about nature’s choices in an environmental context, he did so in a book with a very long title that begins, On the Origin of Species by Means of Natural Selection. Darwin knew his audience and laid out his argument clearly and well, with one stumbling block: How did all that heredity stuff actually work?

We now know–thanks to a meticulous scientist named Gregor Mendel (who also was a monk), our understanding of reproductive cell division, and modern genetics–exactly how it all works. Our traits–whether winners or losers in the fitness Olympics–have genes that determine them. These genes exist in us in pairs, and these pairs separate during division of our reproductive cells so that our offspring receive one member or the other of the pair. When this gene meets its coding partner from the other parent’s cell at fertilization, a new gene pair arises. This pairing may produce a similar outcome to one of the parents or be a novel combination that yields some new version of a trait. But this separating and pairing is how nature keeps things mixed up, setting up choices for selection.

Ernst Mayr, via PLoS.
With a growing understanding in the twentieth century of genetics and its role in evolution by means of natural selection, a great evolutionary biologist named Ernst Mayr (1904–2005) guided a meshing of genetics and evolution (along with other brilliant scientists including Theodosius Dobzhansky, George Simpson, and R.A. Fisher) into what is called The Modern Synthesis. This work encapsulates (dare I say, “synthesizes?”) concisely and beautifully the tenets of natural selection in the context of basic genetic inheritance. As part of his work, Mayr distilled Darwin’s ideas into a series of facts and inferences.

Facts and Inferences

Mayr’s distillation consists of five facts and three inferences, or conclusions, to draw from those facts.
  1. The first fact is that populations have the potential to increase exponentially. A quick look at any graph of human population growth illustrates that we, as a species, appear to be recognizing that potential. For a less successful example, consider the sea turtle. You may have seen the videos of the little turtle hatchlings valiantly flippering their way across the sand to the sea, cheered on by the conservation-minded humans who tended their nests. What the cameras usually don’t show is that the vast majority of these turtle offspring will not live to reproduce. The potential for exponential growth is there, based on number of offspring produced, but…it doesn’t happen.
  2. The second fact is that not all offspring reproduce, and many populations are stable in size. See “sea turtles,” above.
  3. The third fact is that resources are limited. And that leads us to our first conclusion, or inference: there is a struggle among organisms for nutrition, water, habitat, mates, parental attention…the various necessities of survival, depending on the species. The large number of offspring, most of which ultimately don’t survive to reproduce, must compete, or struggle, for the limited resources.
  4. Fact four is that individuals differ from one another. Look around. Even bacteria of the same strain have their differences, with some more able than others to withstand an antibiotic onslaught. Look at a crowd of people. They’re all different in hundreds of ways.
  5. Fact five is that much about us that is different lies in our genes–it is inheritable. Heredity undeniably exists and underlies a lot of our variation.
So we have five facts. Now for the three inferences:

  1. First, there is that struggle for survival, thanks to so many offspring and limited resources. See “sea turtle,” again.
  2. Second, different traits will be passed on differentially. Put another way: Winner traits are more likely to be passed on.
  3. And that takes us to our final conclusion: if enough of these “winner” traits are passed to enough individuals in a population, they will accumulate in that population and change its makeup. In other words, the population will change over time. It will be adapted to its environment. It will evolve.
Other mechanisms of evolution

A pigeon depicted in Charles Darwin’s Variation of Animals and Plants Under Domestication, 1868. U.S. public domain image, via Wikimedia.
When Darwin presented his idea of natural selection, he knew he had an audience to win over. He pointed out that people select features of organisms all the time and breed them to have those features. Darwin himself was fond of breeding pigeons with a great deal of pigeony variety. He noted that unless the pigeons already possessed traits for us to choose, we would not have that choice to make. But we do have choices. We make super-woolly sheep, dachshunds, and heirloom tomatoes simply by selecting from the variation nature provides and breeding those organisms to make more with those traits. We change the population over time.

Darwin called this process of human-directed evolution artificial selection. It made great sense for Darwin because it helped his reader get on board. If people could make these kinds of choices and wreak these kinds of changes, why not nature? In the process, Darwin also described a second way evolution can happen: selection directed by humans. We’re awash in it today, from our accidental development of antibiotic-resistant bacteria to wheat that resists devastating rust.

Genetic drift: fixed or lost

What about traits that have no effect either way, that are just there? One possible example in us might be attached earlobes. Good? Bad? Ugly? Well…they don’t appear to have much to do with whether or not we reproduce. They’re just there.

When a trait leaves nature so apparently indifferent, the alleles underlying it don’t experience selection. Instead, they drift in one direction or another, toward extinction or 100 percent frequency. When an allele drifts to disappearance, we say that it is lost from the population. When it drifts to 100 percent presence, we say that it has become fixed. This process of evolution by genetic drift reduces variation in a population. Eventually, everyone will have the allele, or no one will.
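For anyone who likes to see the dice being rolled, here’s a minimal simulation sketch of drift; the population size, starting frequency, and number of runs are arbitrary choices for illustration. With no selection at all, every run still ends with the allele either lost or fixed:

```python
# Minimal sketch of genetic drift: no selection, only chance sampling.
import random

def drift_to_outcome(pop_size: int = 20, freq: float = 0.5) -> float:
    """Resample an allele's frequency each generation until it is
    lost (frequency 0.0) or fixed (frequency 1.0)."""
    while 0.0 < freq < 1.0:
        # Each of the 2N allele copies in the next generation is drawn at
        # random from the current generation's allele pool.
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

outcomes = [drift_to_outcome() for _ in range(1000)]
print("fixed:", outcomes.count(1.0), "lost:", outcomes.count(0.0))
```

Because the starting frequency here is 0.5, roughly half the runs end in fixation and half in loss, but any single population winds up at one extreme or the other.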

Gene flow: genes in, genes out

Another way for a population to change over time is for it to experience a new infusion of genes or to lose a lot of them. This process of gene flow into or out of the population occurs because of migration in or out. Either of these events can change the allele frequency in a population, and that means that gene flow is another way that evolution can happen.

If gene flow happens between two different species, as can occur more with plants, then not only has the population changed significantly, but the new hybrid that results could be a whole new species. How do you think we get those tangelos?

Horizontal gene transfer

One interesting mechanism of evolution is horizontal gene transfer. When we think of passing along genes, we usually envision a vertical transfer through generations, from parent to offspring. But what if you could just walk up to a person and hand over some of your genes to them, genes that they incorporate into their own genome in each of their cells?

Of course, we don’t really do that–at least, not much, not yet–but microbes do this kind of thing all the time. Viruses that hijack a cell’s genome to reproduce can accidentally leave behind a bit of gene and voila! It’s a gene change. Bacteria can reach out to other living bacteria and transfer genetic material to them, possibly altering the traits of the population.

Evolutionary events

Sometimes, events happen at a large scale that have huge and rapid effects on the overall makeup of a population. These big changes mark some of the turning points in the evolutionary history of many species.

Cheetahs underwent a bottleneck that has left them with little genetic variation. Photo credit: Malene Thyssen, via Wikimedia.
Bottlenecks: losing variation

The word bottleneck pretty much says it all. Something happens over time to reduce the population so much that only a relatively few individuals survive. A bottleneck of this sort reduces the variability of a population. These events can be natural–such as those resulting from natural disasters–or they can be human induced, such as species bottlenecks we’ve induced through overhunting or habitat reduction.

Founder effect: starting small

Sometimes, the genes flow out of a population. This flow occurs when individuals leave and migrate elsewhere. They take their genes with them (obviously), and the populations they found will initially carry only those genes. Whatever they had with them genetically when they founded the population can affect that population. If there’s a gene that gives everyone a deadly reaction to barbiturates, that population will have a higher-than-usual frequency of people with that response, thanks to this founder effect.

Gene flow leads to two key points to make about evolution: First, a population carries only the genes it inherits and generally acquires new versions through mutation or gene flow. Second, that gene for lethal susceptibility to a drug would be meaningless in a natural selection context as long as the environment didn’t include exposure to that drug. The take-home message is this: What’s OK for one environment may or may not be fit for another environment. The nature of Nature is change, and Nature offers no guarantees.

Hardy-Weinberg: when evolution is absent

With all of these possible mechanisms for evolution under their belts, scientists needed a way to measure whether or not the frequency of specific alleles was changing over time in a given population or staying in equilibrium. Not an easy job. They found–“they” being G. H. Hardy and Wilhelm Weinberg–that the best way to measure this was to predict what the outcome would be if there were no change in allele frequencies. In other words, to predict that from generation to generation, allele frequencies would simply stay in equilibrium. If measurements over time yielded changing frequencies, then the implication would be that evolution has happened.

Defining “Not Evolving”

So what does it mean to not evolve? There are some basic scenarios that must exist for a population not to be experiencing a change in allele frequency, i.e., no evolution. If there is a change, then one of the items in the list below must be false:

  • Very large population (genetic drift can be a strong evolutionary mechanism in small populations)
  • No migrations (in other words, no gene flow)
  • No net mutations (no new variation introduced)
  • Random mating (directed mating is one way nature selects organisms)
  • No natural selection

In other words, a population that is not evolving is experiencing a complete absence of evolutionary processes. If any one of these conditions does not hold in a given population, then evolution can occur, and allele frequencies from generation to generation won’t stay in equilibrium.
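The prediction itself is usually written as the Hardy-Weinberg equation; here it is, with example numbers added purely for illustration:

```latex
% For a gene with two alleles, A and a, at frequencies p and q (p + q = 1),
% a non-evolving population settles into stable genotype frequencies:
\[ p^2 + 2pq + q^2 = 1 \]
% p^2 = frequency of AA, 2pq = frequency of Aa, q^2 = frequency of aa.
% Example: if p = 0.7 and q = 0.3, then
\[ 0.49 + 0.42 + 0.09 = 1, \]
% and those proportions should hold generation after generation unless one
% of the conditions above stops holding.
```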

Convergent Evolution

Arguably the most famous of the egg-laying monotremes, the improbable-seeming platypus. License.
One of the best examples of the influences of environmental pressures is what happens in similar environments a world apart. Before the modern-day groupings of mammals arose, the continent of Australia separated from the rest of the world’s land masses, taking the proto-mammals that lived there with it. Over the ensuing millennia, these proto-mammals in Australia evolved into the native species we see today on that continent, all marsupials or monotremes.

Among mammals, there’s a division among those that lay eggs (monotremes), those that do most gestating in a pouch rather than a uterus (marsupials), and eutherians, which use a uterus for gestation (placental mammals).

Elsewhere in the world, most mammals developed from a common eutherian ancestor and, where marsupials still persisted, probably outcompeted them. In spite of this lengthy separation and different ancestry, however, for many of the examples of placental mammals, Australia has a similar marsupial match. There’s the marsupial rodent that is like the rat. The marsupial wolf that is like the placental wolf. There’s even a marsupial anteater to match the placental one.

How did that happen an ocean apart with no gene flow? The answer is natural selection. The environment that made an organism with anteater characteristics best fit in South America was similar to the environment that made those characteristics a good fit in Australia. Ditto the rats, ditto the wolf.

When similar environments result in unrelated organisms having similar characteristics, we call that process convergent evolution. It’s natural selection in relatively unrelated species in parallel. In both regions, nature uses the same set of environmental features to mold organisms into the best fit.

By Emily Willingham, DXS managing editor

Note: This explanation of evolution and how it happens is not intended to be comprehensive or detailed or to include all possible mechanisms of evolution. It is simply an overview. In addition, it does not address epigenetics, which will be the subject of a different explainer.

Literal XX Xplainer: How we can live with two X chromosomes

This cat also haz those two chromosomes to blame for that splotch on its face.
By Emily Willingham, DXS managing editor

We are “Double X Science” because we target evidence-based information to women, most of whom carry two X chromosomes, although exceptions exist. Some women carry a single X chromosome, and some people can be XY and develop and/or identify as female. That’s one reason we mention “the woman in you” here at Double X Science.

But today, I’m writing about those of us who have at least two X chromosomes. You may know that usually, carrying around a complete extra chromosome can lead to developmental differences, health problems, or even fetal or infant death. How is it that women can walk around with two X chromosomes in each body cell–and the X is a huge chromosome–yet men get by just fine with only one? What are we dealing with here: half a dose of X (for men) or a double dose of X (for women)?

X chromosome (Source)
The answer? Women are typically the ones engaging in what’s known as “dosage compensation.” To manage our double dose of X, each of our cells shuts down one of the two X chromosomes it carries. The result is that we express the genes on only one of our X chromosomes in a given cell. This random expression of one X chromosome in each cell makes each woman a lovely mosaic of genetic expression (although not true genetic mosaicism), varying from cell to cell in whether we use genes from X chromosome 1 or from X chromosome 2.

Because these gene forms can differ between the two X chromosomes, we are simply less uniform in what our X chromosome genes do than are men. An exception is men who are XXY, who also shut down one of those X chromosomes in each body cell; women who are XXX shut down two X chromosomes in each cell. The body is deadly serious about this dosage compensation thing and will tolerate no Xtra dissent.

If we kept the entire X chromosome active, that would be a lot of Xtra gene dosage. The X chromosome contains about 1100 genes, and in humans, about 300 diseases and disorders are linked to genes on this chromosome, including hemophilia and Duchenne muscular dystrophy. Because males get only one X chromosome, these X-linked diseases are more frequent among males–if the X chromosome they get has a gene form that confers disease, males have no backup X chromosome to make up for the deficit. Women do have that backup and far more rarely have X-linked diseases like hemophilia or X-linked differences like color blindness, although they may be subtly symptomatic depending on how frequently a “bad” version of the gene is silenced relative to the “good” version.

The most common example of the results of the random-ish gene silencing XX mammals do is the calico or tortoiseshell cat. You may have heard that if a cat’s calico, it’s female. That’s because the cat owes its splotchy coloring to having two X chromosome genes for coat color, which come in a couple of versions. One version of the gene results in brown coloring while the other produces orange. If a cat carries both forms, one on each X, wherever the cells shut down the brown X, the cat is orange. Wherever cells shut down the orange X, the cat is brown. The result? The cat can haz calico. 
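Here’s a highly simplified sketch of that patchwork, just to illustrate the randomness. In a real cat, X-inactivation happens early in development, and each patch descends from a founder cell whose random choice is inherited by all of its daughters; the allele labels and patch count below are made up for the example:

```python
# Illustrative sketch of random X-inactivation producing a calico-like mosaic.
import random

X_ALLELES = ("orange", "brown")   # hypothetical coat-color alleles, one per X

def patch_colors(n_patches: int = 10) -> list:
    """For each patch's founder cell, silence one X at random; the X that
    stays active sets the color of that whole patch."""
    colors = []
    for _ in range(n_patches):
        silenced = random.choice(X_ALLELES)
        active = X_ALLELES[0] if silenced == X_ALLELES[1] else X_ALLELES[1]
        colors.append(active)
    return colors

print(patch_colors())  # e.g. ['brown', 'orange', 'orange', 'brown', ...]
```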

Mary Lyon (Source)
Cells “shut down” the X by slathering it with a kind of chemical tag that makes its gene sequences inaccessible. This version of genetic Liquid Paper means that the cellular machinery responsible for using the gene sequences can’t detect them. The inactivated chromosome even has a special name: It’s called a Barr body. The XXer who developed a hypothesis to explain how XX/XY mammals compensate for gene dosage is Mary Lyon, and the process of silencing an X by condensing it is fittingly called lyonization. Her hypothesis, based on observations of coat color in mice, became a law–the Lyon Law–in 2011.


Barr bodies (arrows). (Source)
Yet the silencing of that single chromosome in each XX cell isn’t total. As it turns out, women don’t shut down the second X chromosome entirely. The molecular Liquid Paper leaves clusters of sequences available, as many as 300 genes in some women. That means that women are walking around with full double doses of some X chromosome genes. In addition, no two women silence or express precisely the same sequences on the “silenced” X chromosome. 

What’s equally fascinating is that many of the genes that go unsilenced on a Barr body are very like some genes on the Y chromosome, and the X and Y chromosomes share a common chromosomal ancestor. Thus, the availability of these genes on an otherwise silenced X chromosome may ensure that men and women have the same Y chromosome-related gene dosage, with men getting theirs from an X and a Y and women from having two X chromosomes with Y-like genes.  

Not all genes expressed on the (mostly) silenced X are Y chromosome cross-dressers, however. The fact is, women are more complex than men, genomically speaking. Every individual woman may express a suite of X-related genes that differs from that of the woman next to her and that differs even more from that of the man across the room. Just one more thing to add to that sense of mystery and complexity that makes us so very, very double X-ey.


[ETA: Some phrases in this post may have appeared previously in similar form in Biology Digest, but copyright for all material belongs to EJW.]


Old ovaries, new eggs? Hatching a debate

Can adult women make new oocytes? 

by Sarah C.P. Williams      

For decades, biology textbooks have stated this as fact: “Women are born with all the eggs, or oocytes they will ever have.”1 The assumption — which shapes research on infertility and developmental biology, as well as women’s mindsets about their biological clocks — is that as women age, they use up those reserves they are born with. With each menstrual cycle, egg by egg, the stockpile wears down.

But is it true that women can’t produce any new oocytes in their adult life? Over the past decade, some scientists have begun to question the long-held assumption, publishing evidence that they can isolate egg-producing stem cells from adult human ovaries.

Last week, biologist Allan Spradling of the Howard Hughes Medical Institute and Carnegie Institution for Science cast a shadow over those findings with a new analysis of the ovaries of adult female mice, which have reproductive systems similar to those of humans. By his measures of new egg formation, which he has previously studied and characterized during fetal development, there were no signs of activity in the adults.

“Personally, I think it’s quite clear,” says Spradling. “All the evidence has always said this. When oocyte development is going on, you see cysts everywhere. When you look at adults, you don’t see any.”

An oocyte, or egg cell, surrounded by some supporting cells.

The new paper does little to change the direction of those researchers already pursuing the stem cells, though. Jonathan Tilly of Massachusetts General Hospital was among the first to publish evidence that mice and human females have adult germ-line stem cells that can make new eggs.

“There’s so much evidence now from so many labs that have purified these cells and worked with these cells,” says Tilly. “What I don’t find of value is to say these cells don’t exist.”

For now, the two sides remain fractured — Spradling sees weaknesses in the way Tilly and others have isolated cells from the ovaries and suspects that the properties of the cells could change when they’re outside the body. And Tilly proposes that Spradling’s new data could be interpreted in a different way that in fact supports the presence of stem cells.

For women hoping for a scientific breakthrough to treat infertility — or even those simply curious about how their own body works — a consensus on the answer would be nice. But the continued probing on both sides may be just as much a boon to women’s health. After all, it’s questions like these that drive science forward.

In his new study, Spradling labeled a smattering of cells in the ovaries of female mice with fluorescent markers to make them visible and watched them as the mice aged. If any labeled cells were egg-producing stem cells, he says, they would spread the fluorescence as they made clusters of new eggs.

“But you never see clusters,” Spradling says. “Not once.”

In the process of this study, though, Spradling made new observations about how egg cells develop into their final form in female mice, published in a second paper this month. As the precursor cells to eggs mature, they lump together into cysts, a phenomenon also seen in the flies that Spradling has spent decades studying. In flies, one cyst eventually forms one egg. But in the mice, he discovered, those cysts break apart and form multiple eggs.

“This actually leads us to propose a new mechanism for what determines the number of oocytes,” says Spradling.  And, of course, that means a better understanding of reproductive biology.

For those who are confident that adult ovarian stem cells exist, the stakes are high: the field of fertility medicine could be revolutionized if the cells that Tilly has isolated from ovaries can form healthy egg cells that can be fertilized in vitro. These stem cells could also be a tool to study more basic questions about oocyte development and formation, or a screening platform for fertility drugs. Tilly is confident enough in the research that he has founded a company, OvaScience, to pursue the commercial and clinical potential of isolating the stem cells.

“The value for the lay public is that we have a new tool in our arsenal,” says Tilly.

Spradling doesn’t dispute that continued research in this area is a good thing. “Scientific knowledge doesn’t just come from the proposal of ideas, but also from their rigorous tests,” he says. “I think the most powerful tool we have in medical science is basic research,” he adds, referring to research using cell and animal studies. Investigations of the basics of how and when oocytes form, he says, are the best way forward toward developing ways to improve egg cell formation or development and could even lead to infertility treatments.

So if it finds support from further studies, Spradling’s new work — which states bluntly right in its title that “Female mice lack adult germ-line stem cells” — needn’t be seen as bad news for those dreaming of a breakthrough in understanding fertility. Instead, whether or not egg stem cells end up having clinical value, it’s a step forward in advancing understanding about women’s reproductive biology.

As Spradling puts it: “You have a much better chance of actually helping someone with infertility if you know what the real biology is. Right now, we’re a ways from really understanding the full biology, but we’re making progress.”

1 Direct quote from the third edition of “Human Physiology: An Integrated Approach,” published by Pearson Education in 2004 and used in medical school classes.

So What’s the Big Deal About the Higgs Boson, Anyway? A Physics Double Xplainer

The ATLAS detector at the Large Hadron Collider, one of four detectors to discover a new particle.
By Matthew Francis, physics editor

After decades of searching and many promising results that didn’t pan out, scientists working at the Large Hadron Collider in Europe announced Wednesday they had found a new particle. People got really excited, and for good reason! This discovery is significant no matter how you look at it: If the new particle is the Higgs boson (which it probably is), it provides the missing piece to complete the highly successful Standard Model of particles and interactions. If the new particle isn’t the Higgs boson, well…that’s interesting too.

So what’s the big deal? What is the Higgs boson? What does Wednesday’s announcement really mean? What’s the meaning of life? Without getting too far over my head, let me try to answer at least some of the common questions people have about the Higgs boson, and what the researchers in Europe found. If you’d rather have everything in video form, here’s a great animation by cartoonist Jorge Cham and an elegant explanation by Ian Sample. Ethan Siegel also wrote a picture-laden joyride through Higgs boson physics; you can find a roundup of even more posts and information at Wired and at Boing-Boing. (Disclaimer: my own article about the Higgs is linked both places, so I may be slightly biased.)

Q: What is the Higgs boson?
A: The Higgs boson is a particle predicted by the Standard Model. It’s a manifestation of the “Higgs field”, which explains why some particles have mass and other particles don’t.

Q: Whoa, too fast! What’s a boson?
A: A boson is a large mammal indigenous to North America. No wait, that’s bison. [Ed note: Ha. Ha. Ha.] On the tiniest level, there are two basic types of particles: fermions and bosons. You’re made of fermions: the protons, neutrons, and electrons that are the constituents of atoms are all fermions. On a deeper level, protons and neutrons are built of quarks, which are also fermions. Bosons carry the forces of nature; the most familiar are photons–particles of light–which are manifestations of the electromagnetic force. There are other differences between fermions and bosons, but we don’t need to worry about them for now; if you want more information, I wrote a far longer and more detailed explanation at my personal blog.

Q: What does it mean to be a “manifestation” of a force?
A: The ocean is a huge body of water (duh), but it’s always in motion. You can think of waves as manifestations of the ocean’s motion. The electromagnetic field (which includes stuff like magnets, electric currents, and light) manifests itself in waves, too, but those waves only come in distinct indivisible chunks, which we call photons. The Higgs boson is also a manifestation of a special kind of interaction.

Q: How many kinds of forces are there?
A: There are four fundamental forces of nature: gravity, electromagnetism, and the two nuclear forces, creatively named the weak and strong forces. Gravity and electromagnetism are the forces of our daily lives: Gravity holds us to Earth, and electromagnetism does nearly everything else. If you drop a pencil, gravity makes it fall, but your holding the pencil is electromagnetic, based on how the atoms in your hand interact with the atoms in the pencil. The nuclear forces, on the other hand, are very short-range forces and are involved in (wow!) holding the nuclei of atoms together.

Q: OK, so what does the Higgs boson have to do with the fundamental forces?
A: All the forces of nature have certain things in common, so physicists from Einstein on have tried to describe them all as aspects of a single force. This is called unification, and to this day, nobody has successfully accomplished it for all four forces. (Sounds like a metaphor for something or other.) However, unification of electromagnetism with the weak force was accomplished, yielding the electroweak theory. Nevertheless, there was a problem in the first version: It simply didn’t work if electrons, quarks, and the like had mass. Because particles obviously do have mass, something was wrong. That’s where the Higgs field and Higgs boson come in. British physicist Peter Higgs and his colleagues figured out that if there were a new kind of field, it could explain both why the electromagnetic force and weak force behave differently and provide mass to the particles.

Q: Wait, I thought mass is fundamental?
A: One of the insights of modern physics is that particles aren’t just single objects: They are defined by interactions. Properties of particles emerge out of their interactions with fields, and mass is one of those properties. (That makes unifying gravity with the other forces challenging, which is a story for another day!) Some particles are more susceptible to interacting with the Higgs. An analogy I read (and apologies for not remembering where I read it) says it’s like different shoes in the snow. A snowshoe corresponds to a low-mass particle: very little snow mass sticks to it. A high-mass particle interacts strongly with the Higgs field, so that’s like hiking boots with big treads: lots of places for the snow to stick. Electrons are snowshoes, but the heaviest quarks are big ol’ hiking boots.

Q: Are there Higgs bosons running around all over the place, just like there are photons everywhere?
A: No, and it’s for the same reason we don’t see the bosons that carry the weak force. Unlike photons, the Higgs boson and the weak force bosons (known as the W and Z bosons — our particle physics friends ran out of creative names at some point) are relatively massive. Many massive particles decay quickly into less massive particles, so the Higgs boson is short-lived.

Q: So how do you make a Higgs boson?
A: The Higgs field is everywhere (like The Force in Star Wars), but to make a Higgs boson, you have to provide enough energy to make its mass. Einstein’s famous formula E = mc^2 tells us that mass and energy are interchangeable: If you have enough energy (in the right environment), you can make new particles. The Large Hadron Collider (LHC) at CERN in Europe and the Tevatron at Fermilab in the United States are two such environments: Both accelerate particles to close to the speed of light and smash them together. If the collisions are right, they can make a Higgs boson.
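To put some numbers on that, here is a quick back-of-the-envelope sketch in Python. It is my own illustration, not something from the article: the ~125 GeV/c^2 mass is the figure reported in the July 2012 announcement, and the proton mass and unit conversion are standard constants.

```python
# Back-of-the-envelope sketch (illustrative only): how much energy does
# E = mc^2 say you need to create a particle of a given mass?

C = 299_792_458.0                  # speed of light, in m/s
JOULES_PER_GEV = 1.602176634e-10   # 1 GeV expressed in joules

def rest_energy_gev(mass_kg):
    """Return the rest energy of a mass (in kg) in giga-electron-volts (GeV)."""
    return mass_kg * C**2 / JOULES_PER_GEV

proton_mass_kg = 1.67262192e-27
print(f"Proton rest energy: {rest_energy_gev(proton_mass_kg):.3f} GeV")   # ~0.938 GeV

# The new particle announced in July 2012 weighed in at roughly 125 GeV/c^2,
# well over 100 times the proton's rest energy, which is why you need a
# collider that can pack that much energy into a single collision.
new_particle_gev = 125.0
print(f"Ratio to proton: {new_particle_gev / rest_energy_gev(proton_mass_kg):.0f}x")
```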

Q: Is this new particle actually the Higgs boson then?
A: That’s somewhat tricky. While the Standard Model predicts the existence of a Higgs boson, it doesn’t tell us exactly what the mass should be, which means the energy to make one isn’t certain. However, we have nice limits on the mass the Higgs could have, based on the way it interacts with other particles like the other bosons and quarks. This new particle falls in that range and has other characteristics that say “Higgs.” This is why a lot of physics writers, including me, will say the new particle is probably the Higgs boson, but we’ll hedge our bets until more data come in. The particle is real, though: four different detectors (ATLAS and CMS at CERN, and DZero and CDF at Fermilab) all saw the same particle with the same mass.

Q: But I’m asking you as a friend: Is this the Higgs boson?
A: I admit: a perverse part of me hopes it’s something different. If it isn’t the Higgs boson, it’s something unexpected and may not correspond to anything predicted in any theory! That’s an exciting and intriguing result. However, my bet is that this is the Higgs boson, and many (if not most) of my colleagues would agree.

Q: What’s all this talk about the “God particle”?
A: Physicists HATE it when the Higgs boson is called “the God particle.” Yes, the particle is important, but it’s not godlike. The term came from the title of a book by physicist Leon Lederman; he originally wanted to call it “The Goddamn Particle”, since the Higgs boson was so frustrating to find, but his editor forced a change.

Q: Why should I, as a non-physicist, care about this stuff?
A: While it’s unlikely that the discovery of the Higgs boson will affect you directly, particle colliders like the LHC and Tevatron have spurred development of new technologies. However, that’s not the primary reason to study this. By learning how particles work, we learn about the Universe, including how we fit into it. The search for new particles meshes with cosmology (my own area): It reveals the nature of the Universe we inhabit. I find a profound romance in exploring our Universe, learning about our origins, and discovering things that are far from everyday. If we limit the scope of exploration only to things that have immediate practical use, then we might as well give up on literature, poetry, movies, religion, and the like right now.

Q: If this is the Higgs boson, is that the final piece of the puzzle? Is particle physics done?
A: No, and in fact bigger mysteries remain. The Higgs boson is predicted by the Standard Model, but we know 80% of the mass of the Universe is in the form of dark matter, stuff that doesn’t emit or absorb light. We don’t know exactly what dark matter is, but it’s probably a particle — which means particle colliders may be able to figure it out. Hunting for an unknown particle is harder than looking for one we are pretty sure exists. Finding the Higgs (if I may quote myself) is like The Hobbit: It’s a necessary tale, but the bigger epic of The Lord of the Rings is still to come.

Double Xplainer: Once in a Blue Moon

Full Moon, from Flickr user Proggie under Creative Commons license.
Tonight—August 31, 2012—is the second full Moon of August. The last time two full Moons occurred in the same month was in 2010, and the next will be in 2015, so while these events don’t happen often, they aren’t terribly rare either. In fact, you’ve probably heard the second full Moon given a name: “blue moon”. (The Moon will not appear to be a blue color, though, cool as that would be. More on that in a bit.) What you may not know is that this term dates back only to 1946, and is actually a mistake.

According to Sky and Telescope, a premier astronomy magazine (check your local library!), the writer James Hugh Pruett made an incorrect assumption about the use of the term “blue moon” in his March 1946 article. His source was the Maine Farmers’ Almanac, but he misinterpreted it. The almanac used “blue moon” to refer to the rare occasion when four full Moons happen in one season, when there are usually only three. By the almanac’s standards, tonight’s full Moon is not a blue moon (though there will be one on August 21, 2013).

However, even that definition of “blue moon” apparently only dates to the early 19th century. In its colloquial, non-astronomical sense, a “blue moon” is something that rarely or never happens: like the Moon appearing blue. The Moon is white and gray when it’s high in the sky, and can appear very red, orange, or yellow near the horizon for the same reason the Sun does. As far as I can tell, the only time the Moon appears blue is when there’s a lot of volcanic ash in the air, also a rare event (thankfully) for most of the world. The popular song “Blue Moon” (written by everyone’s favorite gay misanthrope, Lorenz Hart) uses “blue” to mean sad, rather than rare.

I’m perfectly happy to keep the common mistaken usage of “blue moon” around, though, since it’s not really a big deal to me. Call tonight’s full Moon a blue moon, and I’ll back you up. However, because it’s me, let’s talk about the Moon and the Sun and why this stuff is kind of arbitrary.

The Moon and the Sun Don’t Get Along

The calendar used in much of the world is the Gregorian calendar, named for Pope Gregory XIII, who instituted it in 1582. The Gregorian calendar, in turn, was based on the older Roman calendar (known as the Julian calendar, for famous pinup girl Julie Callender, er, Julius Caesar). The Romans’ calendar was based on the Sun: a year is the length of time it takes the Sun to return to the same spot in the sky. This length of time is approximately 365.25 days, which is why there’s a leap year every four years. (Experts know I’m simplifying; if you want more information, see this post at Galileo’s Pendulum.)

A problem arises when you try to break the year into smaller pieces. Traditionally, this has been done through reference to the Moon’s phases. The time to cycle through all the phases of the Moon is called a lunation, which is about 29 days, 12 hours, 44 minutes, and 3 seconds long. You don’t need to pull out a calculator to realize that a lunation doesn’t divide into a year evenly, but it’s still a reasonable way to mark the passage of time within a year, so it’s the foundation of the month (or moonth).
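To see how badly a lunation and a solar year disagree, here is a quick calculation. This is my own illustrative sketch in Python, using only the figures quoted above.

```python
# Back-of-the-envelope arithmetic using the figures quoted above (illustrative only).

LUNATION_DAYS = 29 + 12/24 + 44/(24*60) + 3/(24*3600)   # 29 d 12 h 44 m 3 s, ~29.5306 days
SOLAR_YEAR_DAYS = 365.25                                # the approximate year used above

print(f"One lunation ~ {LUNATION_DAYS:.4f} days")
print(f"Lunations per year ~ {SOLAR_YEAR_DAYS / LUNATION_DAYS:.2f}")   # ~12.37, not a whole number

# Twelve lunar months come up roughly 11 days short of a solar year...
print(f"12 lunar months ~ {12 * LUNATION_DAYS:.1f} days, "
      f"about {SOLAR_YEAR_DAYS - 12 * LUNATION_DAYS:.1f} days short of a solar year")
# ...so any calendar that wants both lunar months and solar years has to fudge somewhere.
```

That mismatch is exactly what the leap-month and solar-only workarounds described next are trying to paper over.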

Many calendars—the traditional Chinese calendar, the Jewish calendar, and others—define the month based on a lunation but don’t fix the number of months in a year. That means some years have 12 months, and others have 13: a leap month. It also means that holidays in these calendars move relative to the Gregorian calendar, such that Yom Kippur or the Chinese New Year don’t fall on the same date in 2012 that they did in 2011. (The Christian religious calendar combines aspects of the Jewish and Gregorian calendars: Christmas is always December 25, but Easter and its associated holidays are tied to Passover—which is coupled to the first full Moon after the spring equinox, and so can occur on a variety of dates in March and April.)

Another resolution to the problem of lunations vs. Sun is to ignore the Sun; this is what the Islamic calendar does. Months are defined by lunations, and the year is precisely 12 months, meaning the year in this calendar is 354 or 355 days long. This is why the holy month of Ramadan moves throughout the Gregorian year, happening sometimes in summer, and sometimes in winter.

The Gregorian calendar takes the opposite approach to the Islamic calendar: it keeps months, but they are not based on a lunation at all. Months may be 30 days long (roughly one lunation), 31 days, or 28 days; the latter two options make no astronomical sense at all. Solar-only calendars have some advantages: since seasons are defined relative to the Sun, the equinoxes and solstices happen on roughly the same date every year, which doesn’t happen in lunation-based calendars. It’s all a matter of taste, culture, and convenience, however, since the cycles of the Sun and the Moon don’t cooperate with the length of the day on Earth, or with each other.

Blue moons in the common post-1946 usage never happen in lunation-based calendar systems because by definition each phase of the Moon only occurs once in a month. On the other hand, the version from the Maine Farmers’ Almanac is relevant to any calendar system, because it’s defined by the seasons. As I wrote in my earlier DXS post, seasons are defined by the orbit of Earth around the Sun, and the relative orientation of Earth’s axis. Thus, summer is the same number of days whatever calendar system you use, even though it may not always be the same number of months. In a typical season, there will be three full Moons, but because of the mismatch between lunations and the time between equinoxes and solstices, some rare seasons may have four full Moons.
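The same arithmetic shows why the almanac’s seasonal blue moon is possible but rare. Again, this is my own illustrative sketch, using an average season length rather than any particular year.

```python
# Rough arithmetic for the Maine Farmers' Almanac definition (illustrative only).

LUNATION_DAYS = 29.5306          # time between successive full Moons
SEASON_DAYS = 365.2422 / 4       # an average astronomical season, ~91.3 days

print(f"Average full Moons per season ~ {SEASON_DAYS / LUNATION_DAYS:.2f}")   # ~3.09

# A season holds just a bit more than three lunations on average, so most seasons
# get three full Moons; only occasionally does the timing line up so that a
# fourth one squeezes in: the almanac's "blue moon."
```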

The Moon and Sun have provided patterns for human life and culture, metaphors for poetry and drama, and of course lots of superstition and pseudoscience. However, one thing most people can agree upon: the full Moon, blue or not, is a thing of beauty. If you can, go out tonight and have a look at it—and give it a wink in honor of the first human to set foot on it, Neil Armstrong.

Xplainer: How do you date a pregnancy?



By Catherine Anderson, DXS contributor
[This post first appeared on Musings of Genegeek.]

In the first case-based class of medical school, students are asked to answer a virtual patient’s question about the development of the fetus. These students are smart; they know all about beta-hCG and are anxious to showcase their knowledge of the menstrual cycle, with its fluctuating levels of various hormones (FSH, progesterone, etc.). Yet one question brings confusion: “How pregnant is this woman?” The related question, “When does pregnancy start?” leaves the students flummoxed. Is it at conception? But how do you know when that happens? Or does implantation make more sense? It’s a great example of how detailed facts need the larger context.
The usual dating is gestational age, based on the first day of your last menstrual period. However, you can also date a pregnancy with embryological age, starting at conception.
How you date a pregnancy can depend on your perspective. My very general guideline:
  • Pregnant woman is the focus = gestational age (e.g., obstetricians) 1
  • Focus on embryological/fetal development = embryological age (e.g., developmental biologist) 2
But why are there two types of dates? We might need a bit of a primer on the menstrual cycle and how it relates to pregnancy.

Implantation happens between days 20 and 22. Pregnancy is often detected after the first missed period.
This graphic is intentionally simple, removing all the hormones and other fun stuff (Ed: which you can find here). You’ll note that it says approximately day 14 and day 28. In textbooks, we often see that women have 28-day cycles and everything has a nice schedule. However, women are not textbooks and sometimes have shorter or longer cycles and/or ovulate at slightly different times. Therefore, knowing when fertilization and conception happen can be a bit tricky. An obvious marker is the first day of the last menstrual period (LMP). Why the first day rather than the last? Well, another variable is the length of menses, but everyone has a first day, so to be consistent, that is the marker used.
We generally use gestational age when discussing pregnancy. So when someone says that they are 8 weeks pregnant, they mean it has been 8 weeks since the first day of the LMP (last menstrual period).
But that means nothing is actually happening during the first two weeks of pregnancy. If you are concerned about development, you don’t start counting at week 1 (the first day of the LMP) but at the time of fertilization, about two weeks later. Therefore, the embryological age generally runs two weeks behind the gestational age.
But remember, we have essentially picked gestational age as the convention for discussing pregnancy dates. So if markers in development suggest that the embryological age differs from what the LMP predicts (for example, that the fetus is 12 weeks along, not 13), it is the corresponding gestational age that is reported to the mother. In our example, the dating would be changed to 14 weeks (12 weeks of embryological age plus the 2-week offset).
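As a rough rule of thumb, the two dating systems simply differ by that two-week offset. Here is a minimal sketch of the conversion. It is my own illustration, not a clinical tool; real-world dating also uses ultrasound measurements and the woman’s actual cycle length.

```python
# Toy converter for the two-week convention described above (illustrative only;
# real pregnancy dating also relies on ultrasound and the actual cycle length).

OFFSET_WEEKS = 2   # gestational age counts from the first day of the LMP,
                   # roughly two weeks before fertilization

def embryological_age(gestational_weeks):
    """Approximate embryological (developmental) age from gestational age."""
    return gestational_weeks - OFFSET_WEEKS

def gestational_age(embryological_weeks):
    """Approximate gestational age from embryological (developmental) age."""
    return embryological_weeks + OFFSET_WEEKS

print(embryological_age(8))    # "8 weeks pregnant" is about 6 weeks of development
print(gestational_age(12))     # a 12-week embryological age is reported as 14 weeks gestational
```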
Due to the difference in these dates, we see confusion beyond medical students thinking about this for the first time. It was recently reported that Arizona had changed its abortion law to be the most restrictive – but it hadn’t. It had just joined other states in making the limit 20 weeks gestational age. Remember, this is the accepted convention for pregnancy dating – but many articles picked up on that initial two weeks of nothingness in gestational age and confused it with embryological age. Was this an example of details without understanding of the greater context?
——
  1. Synonyms include obstetrical and menstrual age. 
  2. Synonyms include developmental, conception, and fetal age. 

Opinions expressed in this piece are those of the author and do not necessarily reflect or conflict with the opinions of DXS editors or contributors.
————————————
Dr. Catherine Anderson is a Clinical Instructor for the Faculties of Medicine and Dentistry for UBC in Vancouver, Canada. She also leads the Future Science Leaders program, helping teens excel in science and technology. She received her PhD in Medical Genetics and has spent the last 10 years helping people understand the biological sciences: the information and the impact on our lives. You can follow her on Twitter @genegeek.

Double Xpression: Meghan Groome

Meghan Groome, PhD, Director of K12 Education and Science & the City, New York Academy of Sciences
[Ed. note: Double X Science has started a new series: Double Xpression: Profiles of Women into Science. The focus of these profiles is how women in science express themselves in ways that aren’t necessarily scientific, how their ways of expression inform their scientific activities and vice-versa, and the reactions they encounter.]
Today’s profile is an interview with Meghan Groome, PhD, New York Academy of Sciences Director of K12 Education and Science & the City, who answered our questions via email with DXS Biology Editor Jeanne Garbarino.

DXS: First, can you give me a quick overview of what your scientific background is and your current connection to science?

MG: I was a bio major since age two. Growing up (and still today) I had a deep love of all things gross, icky, creepy, and crawly and a deep dislike of anything math related. My parents didn’t really know what to do with me, so a theme to my scientific background is that although I was a straight-A student in my bio classes, no one had any idea that I should be doing enrichment programs or making an effort to learn math. I figured that by being a great bio major, I would become a great scientist. So I was an excellent consumer of scientific knowledge but only realized late in life that I needed to be a producer to actually become a scientist.

Being a straight-A student doesn’t actually get you a job when you graduate from a small liberal arts college with a degree in biology and theater, and out of desperation, I took a job teaching. While I wasn’t a good scientist, I turned out to be an excellent teacher and loved the creativity, energy, and never-ending questions that go along with being a science teacher. If you teach from the perspective that science is an endless quest for knowledge, you’ll never get bored taking kids on that journey.

While my background is in biology, my graduate degree is in science education, and I study gender dynamics and student questioning in middle-school classrooms. I currently work for the New York Academy of Sciences as the Director of K12 Education and public programs and spend most of my day convincing scientists that education outreach is not only part of their jobs but a lot of fun.

DXS: What ways do you express yourself creatively that may not have a single thing to do with science?

MG: I’m also a photographer and spend a lot of time wandering around neighborhoods in Brooklyn with a special love of decaying buildings and empty lots. I love how nature conquers things that we humans consider to be permanent – like how we have to constantly beat back the invading hordes of plants and animals even in one of the most man-made environments in the world.

I was also a theater major, so (I) have a strong background in costume design and stage directing. I hate acting but love dance. If I had any talent I would have become a musical theater star but unfortunately enthusiasm and determination can only get you so far.

DXS: Do you find that your scientific background informs your creativity, even though what you do may not specifically be scientific?

MG: I find great joy in seeing how nature conquers human engineering. When I learned about Lynn Margulis’ Gaia hypothesis, I began seeing it everywhere and I think I love photography because I’m documenting the Earth fighting back.

Most of my creative energy comes from working with kids and listening to the wonderful way in which they think about the natural world. Adults can be so rigid in their thinking and are often afraid to say ideas that are out of the mainstream thinking. The older a kid gets, the more we expect them to conform to the adult way of thinking. Middle-school kids are old enough to express their wacky ideas, and young enough to not recognize that their ideas are considered “wrong.”

DXS: Have you encountered situations in which your expression of yourself outside the bounds of science has led to people viewing you differently–either more positively or more negatively?

MG: People tell me all the time “You’re not what we expected” and I’m not really sure how to respond.

In the science education world, my research is informed by my experiences teaching in a very poor district and from a social justice perspective. It’s a rather controversial theoretical framework because it says, “I have an agenda to use my research to bring about equity in an unequal world.” From a research perspective, it means you need to be explicit in your point of view and your biases and have much greater validity and reliability to show that your research is solid. My work is very passion driven so I’ve had to learn when it’s appropriate to pull out my soap box and go full-out social justice to them.

This is changing, but for a long time I kept my personality under wraps in a professional setting. It’s only now — with 10 years’ professional experience, great organizations on my resume, and a PhD — that I can be clever, confront those I disagree with, and even smile. Anyone who’s ever had a beer with me knows that I’m a goofball and will do just about anything to make someone laugh. I’m a science person, a theater person, a teacher, researcher, policy maker, consultant, and have seen a lot of exquisitely bad and good stuff in my life, and so I am frequently the voice of an outsider even though I look and sound like a total insider. That can really freak people out, especially if they’ve only read my bio or seen me in my most professional mode.

DXS: Have you found that your non-science expression of creativity/activity/etc. has in any way informed your understanding of science or how you may talk about it or present it to others?

MG: I approach teaching science from a fairly theatrical perspective. In my class we dance, sing, laugh, talk about the real world. I’ve never used the textbook, and I’m very insistent that everything be in the first person when writing or speaking about science. I much prefer teaching regular classes — not honors or AP — and can’t stand kids who remind me of myself in high school.

I approach scientists in the same way and try to make them comfortable admitting that they’re more than a brain on a stick. I’ve found one of the biggest fears of young scientists is that their PI will find out that they’re interested in something more than life in the lab, so I always try to work within the existing power structure and make sure the PIs and Deans indicate to them that working with the (New York) Academy (of Sciences) is okay.

DXS: How comfortable are you expressing your femininity and in what ways? How does this expression influence people’s perception of you in, say, a scientifically oriented context?

MG: This question confounds the heck out of me. I am still such a tomboy and have always chosen to present myself as a somewhat genderless individual. I’ve always considered myself “smart not pretty” because I can control how smart I am but not how pretty. A few years ago, my sisters pulled me aside and told me I needed to stop dressing like such a slob. They started buying me pretty, fashionable clothes and insisting that I wear skirts above the knee and get a real hair cut.

Since I started working at the Academy, I have a very public-facing role and have grown to accept that I should look nice. This goes along with slowly feeling comfortable letting my personality out in professional settings, but I still consider myself a tomboy and consider my outward appearance to be a costume designed to do a job.

So I guess the answer is, femininity, what femininity?

DXS: Do you think that the combination of your non-science creativity and scientific-related activity shifts people’s perspectives or ideas about what a scientist or science communicator is? If you’re aware of such an influence, in what way, if any, do you use it to (for example) reach a different corner of your audience or present science in a different sort of way?

MG: I think very few people are brains on a stick but that being a scientist often requires us to pretend we have no life outside the lab. I’ve now worked with hundreds of young scientists who spend time working with kids and I’m so pleased to see how quickly they shift from lab geek to real person when talking with a 4th grader. I want scientists to be evangelicals for science, and I want that to include the fact that scientists are real, fallible, wacky, wonderful people too.

DXS: If you had something you could say to the younger you about the role of expression and creativity in your chosen career path, what would you say?

MG: I was always encouraged to be an individual and be myself. I credit my parents with allowing me to pursue my passion and not try to box me in to one identity. It’s never been easy to forge my own path, and I dedicate a lot of myself to my work.

My advice to my younger self would be to slow down a bit, know that you don’t have to get 100% on everything, and know that the problems of the world don’t have to be solved right now.

And perhaps to learn how to be a bit more like a girl. It’s incredibly powerful to see yourself as smart and pretty.


———————————————————————
Meghan Groome is the Director of K12 Education and Science & the City at the New York Academy of Sciences, an organization with the mission to advance scientific research and knowledge, support scientific literacy, and promote the resolution of society’s global challenges through science-based solutions. After graduating from Colorado College in Biology and Theatre, she desperately needed a job and took one as a substitute teacher at a middle school in Ridgewood, NJ. She discovered that she had a knack for making science interesting and enjoyable, mostly through bringing in gross things, lighting things on fire (but always in a safe manner), and having a large library of the world’s best science writing and science fiction. After teaching in both Ridgewood and Paterson, NJ, she completed her PhD at Teachers College (TC) Columbia University with a focus on student question-asking in the classroom. While at TC, she was a founding member of an international education consulting firm and worked on projects from Kenya to Jordan with a focus on designing new schools and school systems in the developing world. 

After graduating, Dr. Groome became a Senior Policy Analyst at the National Governors Association on Governor Janet Napolitano’s Innovation America Initiative. Prior to her work at the Academy, Dr. Groome worked at the American Museum of Natural History, authored the policy roadmap for the Empire State STEM Education Network, and taught urban biodiversity in the Education Department. At the Academy, she is responsible for the Afterschool STEM Mentoring program, which places graduate students and postdocs in the City’s afterschool programs, and the Science Teacher program, where she designs field trips and content talks for the City’s STEM teachers. Connect with her on Twitter, and read her NYAS blog!

Blog of the Week: PsiVid with Carin Bondar and Joanne Manaster


This week’s blog selection comes to us via the Scientific American blog network. PsiVid, a “cross-section of science on the cyber-stream,” features two scientists and mothers, Carin Bondar and Joanne Manaster, both known for their work and expertise in sci-filmmaking. Over at their Scientific American blog, you’ll find all things sci-video related, including contest information, the Monday music video, and some commentary on what’s happening–and not happening–in the world of science videos.

Carin Bondar, a mother of four, is indeed a biologist with a twist, as her Website will tell you. She’s a formally trained ballet dancer, a video star (proving that yes, bionerds can be smart and videogenic), a scifilm curator, and a writer. Her Website includes postings of the Cool Biology Job of the Week, the kinds of jobs that make you briefly wish you could turn back time, ditch everything you own, and hop on board. If you’re a parent nurturing the little scientist in your life, just reading some of these job descriptions to them should prove inspiring.

Like many of us, Carin confesses to having been “in love with biology” since she was a little girl. Her love has led her to her multiple careers as filmmaker and writer, and she still brings the biology through her insightful dissections of topics like chastity belts and cross-dressing in the animal kingdom. 

If you’d like to at least try to keep up with this whirling–pirouetting?–dervish of a scientist and mother, you can follow Carin on Twitter @drbondar. She’s smart, funny, engaging, kind, and lovely. Don’t miss out.


Joanne Manaster is a biologist, mom, and former model who, like Carin, defies any lingering stereotype of biologists as bald, bearded men in white coats. Her Website, aptly named “Joanne loves science,” includes Joanne’s famous video book reviews, posts about the science of beauty, and Joanne’s own favorite makeup videos.

When she’s not in front of a camera improving science literacy, Joanne is a lecturer in cell and molecular biology at the University of Illinois. She’s a veteran science instructor who now focuses on developing and teaching online science courses for current and future science teachers. Joanne is rather notorious for her videos that make science seem delicious, including gummi bear science and blood cell bakery–they’re cookies! In her clearly limited spare time, Joanne also runs a girls’ bioengineering camp and, as she says, does everything she can “to ensure that her four children all become passionate about something.” An important goal.

To follow this passionate advocate for science literacy who puts her videos–and cookies–where her mind is, you can find her as @sciencegoddess on Twitter.

Think pink? I’d rather raise a stink

Are some of these possible signs of breast cancer present in a famous work of art? Image: public domain, US gov
by Liza Gross, contributor
[Ed. note: This article was originally posted on KQED QUEST on October 3, 2012. It is reposted here with kind permission.]
Just a generation ago, October belonged to the colors of fall, when “every green thing loves to die in bright colors,” as Henry Ward Beecher said. (Growing up back East, you read a lot of odes to fall foliage in school.) For years after moving to the Bay Area from Pennsylvania, I felt a twinge of melancholy when October rolled around, knowing the once-demure woodlands would let loose in a fleeting blaze of brash reds and orange-tinged yellows without me.
Now, of course, October belongs to all things pink, as high-profile outfits from the NFL to Ace Hardware set aside 31 days to raise awareness and money for Breast Cancer Awareness Month. (National Breast Cancer Awareness Month was launched in 1985 by CancerCare, a nonprofit cancer support group, and cancer-drug maker AstraZeneca.)
But as women’s health advocate Dr. Susan Love says, awareness of the disease isn’t the issue. “When the NFL is wearing pink gloves, I think you can say we’re aware,” she said last year. “But the awareness isn’t enough.”
Even raising money isn’t enough. You have to ask where that money is going.
It’s a message that gets lost in an ocean of pink-ribbon products (from bagels and teddy bears to vodka and wine glasses), even though critics like the San Francisco-based nonprofit Breast Cancer Action have warned about “pinkwashing” for years, urging people to look behind the feel-good messages to see who’s really benefiting from the commercialization of cancer.
Breast Cancer Action’s Think Before You Pink—Raise a Stink! campaign encourages consumers to think critically about pink products and ask four simple questions to find out what proportion of proceeds go to breast cancer programs and whether the products sold are safe. The group has especially targeted cosmetics companies for marketing pink merchandise even as they sell products with toxic ingredients. (For more information, download the group’s 30-page “toolkit”.)
The group also urges companies to be more transparent and has long called out those it believes use a good cause to increase their bottom line.
Like Eureka, which donated a dollar for every vacuum cleaner sold in its “Clean for the Cure” campaign. Or American Express, which donated a penny per transaction in its “Charge for the Cure.” Both companies bowed out of the pink sweepstakes after Breast Cancer Action asked, in a 2002 ad in the New York Times, just how breast cancer patients were benefiting from the campaigns.


In October 2000, the San Francisco-based advocacy group Breast Cancer Action ran a full-page ad in the New York Times West Coast Edition with text (not shown) inviting readers to participate in its “Stop Cancer Where It Starts” campaign. The campaign criticized breast cancer awareness campaigns for pushing early detection and mammograms (without acknowledging their limitations) while ignoring prevention. (Image: Courtesy Breast Cancer Action)

Others, like KFC with its 2010 “Buckets for the Cure” campaign, climb on the pink bandwagon to peddle decidedly unhealthy products. Stephen Colbert’s take on the “pink bucket dilemma” shows just how ludicrous cause marketing has become. (Forward to 1:13.)

But even when money goes to breast cancer programs and not corporate coffers, is it going to the right place? Love (and several advocacy groups) has said for years that we need to shift our focus from cures to causes—and prevention.
If we can develop a vaccine for cervical cancer, says Love, why not for breast cancer? Early results from a clinical trial of a vaccine designed to prevent recurrence of one form of breast cancer are promising. (The data were presented at a meeting and have not yet gone through peer review.)
As I wrote in May, Love’s Research Foundation is looking for volunteers in her online Army of Women to identify potential causes in order to eradicate the disease. (Anyone can sign up.)


In the late 1990s, The Breast Cancer Fund, the American Cancer Society, and the Susan G. Komen Breast Cancer Foundation invited American artists and writers to submit work about their breast cancer experiences. The resulting exhibit (and book)—Art.Rage.Us.—opened in 1998 at San Francisco’s Main Library. At the time, project coordinator and Breast Cancer Action co-founder Susan Claymon said, “Art.Rage.Us. presents deeply moving and beautiful expressions from women with breast cancer, along with intensely personal statements that provide a window into their hearts and minds.” Claymon died of breast cancer in 2000. She was 61.

Prevention is also a primary concern for the Athena Breast Health Network, a partnership of the five University of California medical centers that collects personalized data on breast cancer patients to optimize treatment and ultimately figure out how to stop cancer before it starts. The site also includes a comprehensive list of breast cancer risk factors.

Recent research suggests that the biology behind one of the listed risk factors, dense breast tissue, may be more complicated than previously thought. Earlier studies found that women with dense breasts had a higher risk of developing breast cancer. (And this finding led to the “right to know” legislation that Gov. Brown recently signed, requiring doctors to tell women if their mammograms show they have dense breasts.) But a recent study in the Journal of the National Cancer Institute suggests that women with denser breasts are not more likely to die of breast cancer. The greatest risk was found for women who had the fattiest breast tissue, a condition linked to obesity. This suggests that if you have dense breast tissue, you may be more likely to get cancer—but not die of it. Love’s blog explained the significance of the findings:
The recent study on breast density showed us, yet again, that women who are obese when they are diagnosed with breast cancer are more likely to die of breast cancer than women who are not obese. Doctors need to do more than tell women about their breast density or remind them to get a mammogram. They need to be teaching women the importance of exercising, losing weight (if necessary) and eating a well-balanced diet—both before and after a breast cancer diagnosis.