Biology Explainer: The Big Four building blocks of life – carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Perhaps even more surprising is that two of these categories of large molecules themselves break down into a small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.


Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.

Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
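To get a feel for that combinatorial explosion, here is a minimal sketch (in Python, purely for illustration; the function name is ours, not standard biology):

```python
# Distinct ordered chains of length n built from an alphabet of k monomers: k ** n.

def polymer_count(alphabet_size: int, chain_length: int) -> int:
    """Count the distinct polymers you could build from a monomer alphabet."""
    return alphabet_size ** chain_length

# 20 amino acids: even a short 10-residue peptide has trillions of variants.
print(polymer_count(20, 10))  # 10240000000000

# 4 DNA bases: a mere 30-base stretch already allows over 10**18 sequences.
print(polymer_count(4, 30))
```

A handful of building blocks, an astronomical number of possible chains: that is the whole trick.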

Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
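You can check the bookkeeping of dehydration synthesis by counting atoms. Here is a sketch in Python (the formulas for glucose, water, and maltose are standard chemistry, not taken from this text):

```python
from collections import Counter

# Molecular formulas written as element -> atom counts.
glucose = Counter({"C": 6, "H": 12, "O": 6})   # C6H12O6
water   = Counter({"H": 2, "O": 1})            # H2O

# Dehydration synthesis: bond two glucose monomers, removing one water.
maltose = glucose + glucose
maltose.subtract(water)

print(dict(maltose))  # {'C': 12, 'H': 22, 'O': 11}, the formula of maltose
```

Hydrolysis is just the reverse: add the water back, and you recover the atoms of the two monomers.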

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and in muscle and draw on it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group turns the cellulose-like chain into a different structural molecule, chitin, which gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every available carbon bond occupied by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the chain is saturated to its fullest, with every available bond taken by a hydrogen single-bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat; coconut and palm oils are also known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is: one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with each other, leaving no space for a hydrogen. Because those carbons share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.
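You can count the hydrogens yourself. A saturated fatty acid with n carbons has the formula CnH2nO2, and each carbon-carbon double bond evicts two hydrogens; here is a quick sketch (in Python, for illustration only):

```python
# A saturated fatty acid with n carbons has the formula C_n H_2n O_2;
# each carbon-carbon double bond removes two hydrogens from the chain.

def hydrogens(carbons: int, double_bonds: int) -> int:
    """Hydrogen count of a fatty acid with the given carbons and C=C bonds."""
    return 2 * carbons - 2 * double_bonds

print(hydrogens(18, 0))  # 36: stearic acid, C18H36O2 (saturated, no kinks)
print(hydrogens(18, 1))  # 34: oleic acid, C18H34O2 (one kink)
print(hydrogens(18, 3))  # 30: linolenic acid, C18H30O2 (three kinks)
```

More double bonds, fewer hydrogens, more kinks: that is the whole spectrum from lard to canola oil.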

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter (mostly saturated fat) into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. The bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar (the technical term is amphipathic), with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some “bad.” The good cholesterol, high-density lipoprotein, or HDL, helps us out in part because it removes the bad cholesterol, low-density lipoprotein, or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein (a peptide), like insulin.


Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked in a longer chain form a polypeptide. When that chain of amino acids folds up, through interactions among its parts, into a properly shaped molecule, we call that molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids. As it forms, hydrogen bonds between atoms along the chain’s backbone pull stretches of it into regular local shapes, most famously the coiled alpha helix and the flat beta-pleated sheet. We call these local conformations along the amino acid chain the protein’s secondary structure. Meanwhile, the amino acids themselves differ: some are hydrophobic and some are hydrophilic, and because like interacts best with like, the hydrophobic amino acids tend to cluster with one another while the hydrophilic ones do the same.

Once those interactions have occurred, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the secondary structures must be in place, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.

In many instances, however, a complete protein consists of more than one amino acid chain, and the complete protein has two or more interacting strings of amino acids. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo (a mutation) can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
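The pairing rules themselves are simple enough to write down directly. Here is a minimal sketch (in Python; it glosses over strand direction and which strand actually gets read, details real cells are fussy about):

```python
# Base-pairing rules: A pairs with T (U in RNA); C pairs with G.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}

def replicate(template: str) -> str:
    """Build the complementary DNA strand from a DNA template."""
    return "".join(DNA_PAIR[base] for base in template)

def transcribe(template: str) -> str:
    """Build the complementary RNA strand from a DNA template."""
    return "".join(RNA_PAIR[base] for base in template)

stretch = "ATGGCA"
print(replicate(stretch))   # TACCGT: DNA copied into DNA
print(transcribe(stretch))  # UACCGU: DNA copied into RNA
```

Same template, two outputs: one for the vault, one for the protein factory floor.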

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA (messenger RNA) but also is the main component of the two types of cellular workers that read that copy and build proteins from it (ribosomal RNA in the ribosome, and transfer RNA, which delivers the amino acids). At one point in this process, all three types of RNA come together in protein assembly to make sure the job is done right.

 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

The Only Mother’s Day Gift Guide You Will Ever Need

Our mothers were nothing like either of these people. (Source)

(Warning: We are having some fun, so what you are about to read does not explicitly contain science but does reference soy, onanism, tubed meats, and vacuums. In keeping with the DXS mission, however, we have embedded a little science here and there in the links. )

While the celebration of mothers is not a new concept, the modern version of Mother’s Day is a far cry from the ancient festivals that honored Cybele.  However, in 1907, when Anna Jarvis invented the modern Mother’s Day as a means to pay homage to her own mother, it was not her intention to use moms for profit.

But, alas, by the 1920s, this well-intended national holiday quickly morphed into the cash cow we see today.  Sure, it is nice to receive a gift, but perhaps capitalism has since stripped Mother’s Day of its original meaning, and for the first week or two in May, we are bombarded with advertisements that claim to know what item every mother must have.  

From this, many sites have done us all the great favor of curating these cannot-live-without gifts into a single, easy-to-navigate list (financial kickbacks notwithstanding), often broken down into natural June Cleaveresque categories like “kitchen” and “for the home” (read: how to cook for everyone and keep shit clean). Besides the fact that these lists can be generalized to every gift-giving holiday for the lovely lady in your life, even Don Draper himself would scoff at many of these suggestions.

Because we at DXS wish to ensure that your Mother’s Day experience is the best it can possibly be, we present you with a different kind of list – one that provides the most valuable unsolicited advice you will ever receive when it comes to choosing for dear old mom.  Here, you will be schooled on what not to get for the woman that gave you life.   

  1. Flowers. One of the most suggested gifts for Mother’s Day is flowers. What woman doesn’t love flowers? Well, one who does not need one more thing to water, or sees her own mortality in each dried up petal aimlessly floating down onto the floor that had just been cleaned. Oh, and those tears you see building up in our eyes? Not tears of joy. You better back up or you might get caught in a sneezing fit of fury because, frankly, the last thing we want to do on “our” day is pretend that we like feeling like our heads will explode. And let us not forget how those flowers came to be available in your local flower shop or supermarket in the first place… from Colombia?

  2. Soy Candles.  Soy. For the last decade or two, we have seen the magical benefits of this plant product popping up in pseudoscientific “reports” in quality magazines like First for Women. And now, soy-pushers all over the intertubes will willingly exclaim that soy is the superior material for the production of candles, allegedly “soot free” (they aren’t really). Sure, anything soy-based will help the American Soy Farmer keep up with the Joneses, but a candle is a candle and unless you are also giving me a golden ticket to enjoy its inherent ambience whilst I soak in my imaginary claw-footed tub, full of bubbles and rose petals and the sultry sounds of Barry White, save it. Plus, I’d rather not burn my house down (again).

  3. Gift Baskets! What says “I admire you like a work colleague” more than a gift basket?  Sure, smoked cheeses and tubed meats taste fine after a few martinis, but when enjoying such delicacies, I prefer to do it while watching my co-workers photocopy their ass cheeks. Some things just don’t have the same effect in the home.  

  4. Teething Necklace. One website was flashing necklaces all over the place – but these weren’t just any old necklaces – they doubled as teething necklaces for the baby.  Anyone who knows anything about a teething baby knows that, despite the alleged pain babies feel (hey, I don’t remember it, do you??), moms suffer the most. So instead of the necklace, why don’t you go ahead and take the dang baby for a few hours and give me a much deserved break? I’ll even sweeten the deal and throw some Tylenol in the diaper bag. And, in the strange and rare event that I might want rope burn on my neck, I’d rather get it from some fantasy role-playing in the boudoir. Take that as you will.

  5. Vacuum Cleaner. If you really think that I want another reminder of how much I have to pick up after you and all of your friends – who regularly come over and wipe out all of the food I just deposited into my refrigerator – then yes, go ahead and buy me a vacuum. I mean, it is not like I don’t already spend all of my “free” time vacuuming the floors, so why not give me the gift that embodies what you really think of me (your maid)? Plus, Dyson has been showing commercials non-stop for a sale that runs until Mother’s Day, with the clear implication to get your mother (or, if you are a mother, to get yourself) a vacuum for Mother’s Day. So if you do decide to get a vacuum, make sure you have $500 for it. Remember, though, that a vacuum is really empty space, so you might want to consider getting me something more tangible–and fun.

  6. Fifty Shades of Grey. Well, maybe I am not too opposed to this, but let it be known that I will probably need about ten minutes (give or take) of “alone” time after each time I pick up this series. As long as you are OK with this, I am OK with this. By the way, did you know that there are really more than 50 shades of grey?
We hope you will seriously consider this advice. After all, we really don’t need more shit to take care of, water, clean with, or… actually, we can always use some more good reads. Happy Mother’s Day!

Motherhood, war, and attachment: what does it all mean?

The antebellum tales
Scene 1: Two fathers encounter each other at a Boy Scout meeting. After a little conversation, one reveals that his son won’t be playing football because of concerns about head injuries. The other father reveals that he and his son love football, that they spoke with their pediatrician about it, and that their son will continue with football at least into middle school. There’s a bit of wary nodding, and then, back to the Pinewood Derby.

Scene 2: Two mothers meet on a playground. After a little conversation about their toddlers, one mother mentions that she still breastfeeds and practices “attachment parenting,” which is why she has a sling sitting next to her. The other mother mentions that she practiced “cry it out” with her children but that they seem to be doing well and are good sleepers. Then one of the toddlers begins to cry, obviously hurt in some way, and both mothers rush over together to offer assistance.

Scene 3: In the evening, one of these parents might say to a partner, “Can you believe that they’re going to let him play football?” or “I can’t believe they’re still breastfeeding when she’s three!” Sure. They might “judge” or think that’s something that they, as parents, would never do.

But which ones are actually involved in a war?

War. What is it good for?

I can’t answer that question, but I can tell you the definition of ‘war’: “a state of armed conflict between different nations or states or different groups within a nation or state.” Based on this definition and persistent headlines about “Mommy Wars,” you might conclude that a visit to your local playground or a mom’s group outing requires decking yourself out cap-à-pie in Kevlar. But the reality on the ground is different. There is no war. Calling disputes and criticisms and judgments about how other people live “war” is like calling a rowboat on a pond the Titanic. One involves lots of energy release just to navigate relatively placid waters while the other involved a tremendous loss of life in a rough and frigid sea. Big difference.

I’m sure many mothers can attest to the following: You have friends who also are mothers. I bet that for most of us, those friends represent a spectrum of attitudes about parenting, education, religion, Fifty Shades of Grey, recycling, diet, discipline, Oprah, and more. They also probably don’t all dress just like you, talk just like you, have the same level of education as you, same employment, same ambitions, same hair, or same toothpaste. And I bet that for many of us, in our interactions with our friends, we have found ourselves judging everything from why she insists on wearing those shoes to why she lets little Timmy eat Pop Tarts. Yet, despite all of this mental observation and, yes, judging, we still manage to get along, go out to dinner together, meet at one another’s homes, and gab our heads off during play dates.

That’s not a war. That’s life. It’s using our brains as shaped by our cultural understanding and education and rejection or acceptance of things from our own upbringing and talks with medical practitioners and books we’ve read and television shows we’ve watched and, for some of us, Oprah. Not one single friend I have is a cookie cutter representation of me or how I parent. Yet, we are not at war. We are friends. Just because people go online and lay out in black and white the critiques that are in their heads doesn’t mean “war” is afoot. It means expressing the natural human instinct to criticize others in a way that we think argues for Our Way of Doing Things. Online fighting is keeping up with the virtual Joneses. In real life, we are friends with the Joneses, and everyone tacitly understands what’s off limits within the boundaries of that friendship. That’s not war. It’s friendly détente.

The reality doesn’t stop the news media from trying to foment wars, rebellions, and full-on revolutions with provocative online “debates” and, lately, magazine covers. The most recent, from Time, features a slender mother, hand on cocked hip, challenging you with her eyes as she nurses her almost-four-year-old son while he stands on a chair. As Time likely intended, the cover caused an uproar. We’ve lampooned it ourselves (see above).

But the question the cover asks in all caps, “Are you mom enough?” is even more manipulative than the cover because it strikes at the heart of all those unspoken criticisms we think–we know–other women have in their heads about our parenting. What we may not consider is that we, too, are doing the same, and still… we are not actually at war. We’re just women, judging ourselves and other women, just like we’ve done since the dawn of time. It’s called “using your brain.” Inflating our interactions and fairly easily achieved parental philosophy détentes to “war” caricatures us all as shrieking harpies, incapable of backing off and being reasonable.

The real question to ask isn’t “Are you mom enough?” In fact, it’s an empty question because there is no answer. Your parenting may be the most perfect replica of motherhood since the Madonna (the first one), yet you have no idea how that will manifest down the road in terms of who your child is or what your child does. Whether you’re a Grizzly or a Tiger or a Kangaroo or a Panda mother, there is no “enough.”

So, instead of asking you “Are you mom enough?”, in keeping with our goal of bringing women evidence-based science, we’ve looked at some of the research describing what might make a successful parent–child relationship. Yes, the answer is about attachment, but not necessarily of the physical kind. So drop your guilt. Read this when you have time. Meanwhile, do your best to connect with your child, understand your child, and respond appropriately to your child.  

Why? Because that is what attachment is–the basic biological response to a child’s needs. If you’re not a nomad or someone constantly on the move, research suggests that the whole “physically attached to me” thing isn’t really a necessary manifestation of attachment. If you harken to it and your child enjoys it (mine did not) and it works for you without seeming like, well, an albatross around your neck, go for it.

What is attachment?

While attachment as a biological norm among primates has been around as long as primates themselves, humans are more complicated than most primates. We have theories. Attachment theory arose from the observations of a couple of human behaviorists or psychologists (depending on whom you ask), John Bowlby and Mary Ainsworth. Bowlby derived the concept of attachment theory, in which an infant homes in on an attachment figure as a “safe place.” The attachment figure, usually a parent, is the person who responds and is sensitive to the infant’s needs and social overtures. That parent is typically the mother, and disruption of this relationship can have, as most of us probably instinctively know, negative effects.

Bowlby’s early approach involved the mother’s having an understanding of the formational experiences of her own childhood and then translating that to an understanding of her child. He even found that when he talked with parents about their own childhoods in front of their children, the result would be clinical breakthroughs for his patients. As he wrote,

Having once been helped to recognize and recapture the feelings which she herself had as a child and to find that they are accepted tolerantly and understandingly, a mother will become increasingly sympathetic and tolerant toward the same things in her child.

Later studies seem to bear out this observation of a connection to one’s childhood experiences and more connected parenting. For example, mothers who are “insightful” about their children, who seek to understand the motivations of their children’s behavior, positively influence both their own sensitivity and the security of their infant’s attachment to them.  

While Bowlby’s research focused initially on the effects of absolute separation between mother and child, Mary Ainsworth, an eventual colleague of Bowlby, took these ideas of the need for maternal input a step further. Her work suggested that young children live in a world of dual and competing urges: to feel safe and to be independent. An attachment figure, a safe person, is for children an anchor that keeps them from becoming unmoored even as they explore the unknown waters of life. Without that security backing them up, a child can feel perpetually unmoored and directionless, with no one to trust for security.

Although he was considered an anti-Freudian rebel, Bowlby had a penchant for Freudian language like “superego” and referred to the mother as the “psychic organizer.” Yet his conclusions about the mother–child bond resonate with their plain language:

The infant and young child should experience a warm, intimate, and continuous relationship with his mother (or permanent mother substitute) in which both find satisfaction and enjoyment.

You know, normal biological stuff. As a side note, he was intrigued by the fact that social bonds between mother and offspring in some species weren’t necessarily tied to feeding, an observation worth keeping in mind if you have concerns about not being able to breastfeed.

The big shift here in talking about the mother–child relationship was that Bowlby was proposing that this connection wasn’t some Freudian libidinous communion between mother and child but instead a healthy foundation of a trust relationship that could healthily continue into the child’s adulthood.

Ainsworth carried these ideas to specifics, noting in the course of her observations of various groups how valuable a mother’s sensitivity to her child’s behaviors was in establishing attachment. In her most famous study, the “Baltimore study” [PDF], she monitored 26 families with new babies. She found that “maternal responsiveness” in the context of crying, feeding, playing, and reciprocating seemed to have a powerful influence on how much a baby cried in later months, although some later studies dispute specific influences on crying frequencies.

Ainsworth also introduced the “Strange Situation” lab test, which seems to have freaked people out when it first entered the research scene. In this test, over the course of 20 minutes, a one-year-old baby is in a room full of toys, first with its mother, then with the mother and a strange woman, then with the stranger only (briefly), then with the mother, and then alone before the stranger and then the mother return. The most interesting findings of the study came from when the mother returned after her first absence, having left the baby alone in the room with a stranger. Some babies seemed quite angry, wanting to be with their mothers but expressing unhappiness with her at the same time and physically rejecting her.

From her observations during the Strange Situation, Ainsworth identified three types of attachment. The first was “Secure,” which, as its name implies, suggested an infant secure and comfortable with an attachment figure, a person with whom the infant actively seeks to interact. Then there’s the insecure–avoidant attachment type, in which an infant clearly is not interested in being near or interacting with the attachment figure. Most complex seems to be the insecure–resistant type, and the ambivalence of the term reflects the disconnected behavior the infant shows, seeming to want to be near the attachment figure but also resisting, as some of the unhappy infants described above behaved in the Strange Situation.

Within these types are now embedded various subtypes, including a disorganized–disoriented type in which the infant shows “odd” and chaotic behavior that seems to have no distinct pattern related to the attachment figure.

As you read this, you may be wondering, “What kind of attachment do my child and I have?” If you’re sciencey, you may fleetingly even have pondered conducting your own Strange Situation en famille to see what your child does. I understand the impulse. But let’s read on.

What are the benefits of attachment?

Mothers who are sensitive to their children’s cues and respond in ways that are mutually satisfactory to both parties may be doing their children a lifetime of favors, in addition to the parental benefit of a possibly less-likely-to-cry child. For example, a study of almost 1300 families looked at levels of cortisol, the “stress” hormone, in six-month-old infants and its association with maternal sensitivity to cues and found lower levels in infants who had “more sensitive” mothers.

Our understanding of attachment and its importance to infant development can help in other contexts. We can apply this understanding to, for example, help adolescent mothers establish the “secure” level of attachment with their infants. It’s also possibly useful in helping women who are battling substance abuse to still establish a secure attachment with their children.

On a more individual level, it might help in other ways. For example, if you want your child to show less resistance during “clean-up” activities, establishing “secure attachment” may be your ticket to a better-looking playroom.

More seriously, another study has found that even the way a mother applies sensitivity can be relevant. Using the beautiful-if-technical term ‘dyads’ to refer to the mother–child pair, this study included maternal reports of infant temperament and observations of maternal sensitivity to both infant distress and “non-distress.” Further, the authors assessed the children behaviorally at ages 24 and 36 months for social competence, behavioral problems, and typicality of emotional expression. They found that a mother’s sensitivity to an infant’s distress behaviors was linked to fewer behavioral problems and greater social competence in toddlerhood. Even more intriguing, the child’s temperament played a role: for “temperamentally reactive” infants, a mother’s sensitivity to distress was linked to less dysregulation of the child’s emotional expression in toddlerhood. 

And that takes me to the child, the partner in the “dyad”

You’re not the only person involved in attachment. As these studies frequently note, you are involved in a “dyad.” The other member of that dyad is the child. As much as we’d like to think that we can lock down various aspects of temperament or expression simply by forcing it with our totally excellent attachment skills, the child in your dyad is a person, too, who arrived with a bit of baggage of her own.

And like the study described above, the child’s temperament is a key player in the outcome of the attachment tango. Another study noted that multiple factors influence “attachment quality.” Yes, maternal sensitivity is one, but a child’s native coping behaviors and temperament also seem to be involved. So, there you have it. If you’re feeling like a parental failure, science suggests you can quietly lay at least some of the blame on the Other in your dyad–your child. Or, you could acknowledge that we’re all human and this is just part of our learning experience together.

What does attachment look like, anyway?

Dr. William Sears took the concept of attachment and its association with maternal sensitivity to a child’s cues and security and… wrote a book that literally translated attachment as a physical as well as emotional connection. This extension of attachment–which Sears appends to every aspect of parenting, from pregnancy to feeding to sleeping–has become in the minds of some parents a prescriptive way of doing things with benefits that exclude all other parenting approaches or “philosophies.” It also involves the concept of “baby wearing,” which always brings up strange images in my mind and certainly takes outré fashion to a whole new level. In reality, it’s just a way people have carried babies for a long time in the absence of other easy modes of transport.

When I was pregnant with our first child and still blissfully ignorant about how little control parents have over anything, I read Sears’ book about attachment parenting. Some of it is common-sense, broadly applicable parenting advice: respond to your child’s needs. Some of it is simply downright impossible for some parent–child dyads, and much of it is based on the presumption that human infants in general will benefit from a one-size-fits-all sling of attachment parenting, although interpretations of the starry-eyed faithful emphasize that more than Sears does.

Because much of what Sears wrote resonated with me, we did some chimeric version of attachment parenting–or, we tried. The thing is, as I noted above, the infant has some say in these things as well. Our oldest child, who is autistic, was highly resistant to being physically attached much of the time. He didn’t want to sleep with us past age four months, and he showed little interest in aspects of attachment parenting like “nurturing touch,” which to him was seemingly more akin to “taser touch.” We ultimately had three sons, and in the end, they all preferred to sleep alone, each at an earlier and earlier age. The first two self-weaned before age one because apparently, the distractions of the sensory world around them were far more interesting than the same boring old boob they kept seeing immediately in front of their faces. Our third was unable to breastfeed at all.

So, like all parents do, we punted, in spite of our best laid plans and intentions. Our hybrid of “attachment parenting” could better be translated into “sensitivity parenting,” because our primary focus, as we punted and punted and punted our way through the years, was shifting our responses based on what our children seemed to need and what motivated their behaviors. Thus, while our oldest declined to sleep with us according to the attachment parenting commandment, he got to sleep with a boiled egg because that’s what he wanted. Try to beat that, folks, and sure, bring on the judging.

The Double X Science Sensitivity Parenting (TM) cheat sheet.

What does “sensitive” mean?

And finally, the nitty-gritty bullet list you’ve been waiting for. If attachment doesn’t mean slinging your child to your body until your lumbar gives out or the child receives a high-school diploma, and parenting is, indeed, one compromise after another based on the exigencies of the moment, what consistent tenets can you practice that meet the now 60-year-old concept of “secure” attachment between mother and child, father and child, or mother or father figure and child? We are Double X Science, here to bring you evidence-based information, and that means lists. The below list is an aggregate of various research findings we’ve identified that seem reasonable and reasonably supported. We’ve also provided our usual handy quick guide for parents in a hurry.
  • Plan ahead. We know that life is what happens while you’re planning things, but… life does happen, and plans can at least serve as a loose guide to navigation. So, plan that you will be a parent who is sensitive to your child’s needs and will work to recognize them.
  • Practice emotion detection. Work on that. It doesn’t come easily to everyone because the past is prologue to what we’re capable of in the present. Ask yourself deliberately what your child’s emotion is communicating because behavior is communication. Be the grownup, even if sometimes, the wailing makes you want your mommy. As one study I found notes, “Crying is an aversive behavior.” Yes, maybe it makes you want to cover your ears and run away screaming. But you’re the grownup with the analytical tools at hand to ask “Why” and seek the answer.
  • Have infant-oriented goals. If you tend to orient your goals in your parent–child dyad toward a child-related benefit (relieve distress) rather than toward a parent-oriented goal (fitting your schedule in some way), research suggests that your dyad will be a much calmer and better mutually adapted dyad.
  • Trust yourself and keep trying. If your efforts to read your child’s feelings or respond to your child’s needs don’t work right away, don’t give up, don’t read Time magazine covers, and don’t listen to that little voice in your head saying you’re a bad parent or the voice in other people’s heads screaming that at you. Just keep trying. It’s all any of us can do, and we’re all going to screw this up here and there.
  • Practice behaviors that are supportive of an infant’s sensory needs. For example, positive inputs like a warm voice and smiling are considered more effective than a harsh voice or being physically intrusive. Put yourself in your child’s place and ask, How would that feel? That’s called empathy. 
  • Engage in reciprocation. Imitating back your infant’s voice or faces, or showing joint attention–all forms of joint engagement–are ways of telling an infant or young child that yes, you are the anchor here, the one to trust, and a really good time, to boot. Allowing this type of attention to persist as long as the infant chooses rather than shifting away from it quickly is associated with making the child comfortable with independence and learning to regulate behaviors.  
  • Talk to your child. We are generally a chatty species, but we also need to learn to chat. “Rich language input” is important in early child development beginning with that early imitation of your infant’s vocalizations.
Lather, rinse, repeat, adjusting dosage as necessary based on age, weight, developmental status, nanosecond-rate changes in family dynamics and emotional conditions, the teen years, and whether or not you have access to chocolate. See? This stuff is easy.



As you read these lists and about research on attachment, you’ll see words like “secure” and “warm” and “intimate” and “safe.” Are you doing this for your child or doing your best to do it? Then you are, indeed, mom enough, whether you wear your baby or those shoes or both. That doesn’t mean that when you tell other women the specifics of your parenting tactics, they won’t secretly be criticizing you. Sure, we’ll all do that. And then a toddler will cry, we’ll drop it, and move on to mutually compatible things.

Yes, if we’re being honest, it makes most of us feel better to think that somehow, in some way, we’re kicking someone else’s ass in the parenting department. Unfortunately for that lowly human instinct, we’re all parenting unique individuals, and while we may indeed kick ass uniquely for them, our techniques simply won’t extend to all other children. It’s not a war. It’s human… humans raising other humans. Not one thing we do, one philosophy we follow, will guarantee the outcome we intend. We don’t even need science, for once, to tell us that.

By Emily Willingham, DXS managing editor

These views are the opinion of the author and do not necessarily either reflect or disagree with those of the
DXS editorial team. 

Tiptoe through the thalamus…

This is how people looked at the brain in 1673. Things have changed.
Sketch by Thomas Bartholin, 1616-1680. 
Image via Wikimedia Commons. Public domain in USA.
In early October, the Allen Institute for Brain Science dropped a metric buttload of brain data into the public domain.
Founded by Microsoft co-founder Paul Allen, the Allen Institute for Brain Science is, not surprisingly, interested in, um, the brain. Specifically, according to the Institute’s web site, its mission is
“to accelerate the understanding of how the human brain works in health and disease. Using a big science approach, we generate useful public resources, drive technological and analytical advances, and discover fundamental brain properties through integration of experiments, modeling, and theory.”
Towards that end, researchers at the Allen Institute have been mapping gene expression patterns in the human and mouse brains, as well as neural connectivity in the mouse brain. Why? Well, because as a general rule, science requires a control. If scientists are ever to understand the brain – how we think, how we learn, how we remember things, and how all those processes get scrambled during disease or trauma – they first must understand what a typical baseline brain looks like. The Allen Institute is doing the heavy lifting of mapping out these datasets, one brain slice at a time.
In particular, they are mapping the gene expression and neural connectivity of every part of the brain, so that researchers can identify differences between regions, as well as the physical links that tie them together. Differences in gene expression patterns may reveal, for instance, that seemingly related regions actually have different functions, while connectivity, or brain “wiring,” could shed light on how the brain works. 
I’m a technology nut, so I’m less interested in the answers to these questions than in how we arrive at them. And thanks to the Allen Institute, I (and you) can view these data from the luxury of my very own laptop, no special equipment required. (To be clear, you can’t view the data from my laptop. You’d need my computer, and you can’t have it.) You don’t even need to be a brainiac (I couldn’t help myself) to do it.
Here’s how. Point your browser to the Allen Institute’s data portal. From there, choose a dataset – say, “Mouse Connectivity.” This is a dataset of images created by injecting fluorescent tracer molecules into the brains of mice, waiting some period of time, then sacrificing the mice, cutting their brains into thin slices — picture an extremely advanced deli slicer — and taking pictures of each one to see where the tracer material went. The result is a massive collection of images, built from hundreds of injected mice and thousands of prepared brain slices; it represents gigabytes upon gigabytes of data, which Allen Institute researchers have then reconstructed into a kind of virtual 3D brain.
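If you’d rather script than click, the Allen Institute also exposes the same data through a public RESTful query API at api.brain-map.org. Here’s a minimal sketch in Python that builds a query URL for connectivity experiments by injection site; the model name, criteria syntax, and the ‘VISp’ structure acronym are assumptions drawn from the API documentation at the time of writing, so verify them against the current docs before relying on them.

```python
# Sketch: building a query URL for the Allen Institute's public API.
# Endpoint, model name, and criteria syntax are assumptions based on the
# API docs at the time of writing -- verify before use.
from urllib.parse import urlencode

BASE = "http://api.brain-map.org/api/v2/data/query.json"

def connectivity_query(injection_acronym: str, num_rows: int = 10) -> str:
    """Build a query URL for mouse connectivity experiments whose
    injection site matches the given structure acronym."""
    criteria = (
        "model::SectionDataSet,"
        "rma::criteria,"
        f"specimen(injections(structure[acronym$eq'{injection_acronym}']))"
    )
    return BASE + "?" + urlencode({"criteria": criteria, "num_rows": num_rows})

# 'VISp' is the atlas acronym for the primary visual area mentioned below.
url = connectivity_query("VISp")
print(url)

# Fetching the JSON response is then one standard-library call away:
#   import json, urllib.request
#   results = json.load(urllib.request.urlopen(url))
```

Nothing here requires special tooling; the same URL pasted into a browser returns the JSON directly.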
In the parlance of neuroscientists, this dataset represents a first-pass attempt at a “connectome” – a brain-wide map of neural connections. But it’s definitely not the last; the connectome is vast beyond reckoning. According to one estimate,
Each human brain contains an estimated 100 billion neurons connected through 100 thousand miles of axons and between a hundred trillion to one quadrillion synaptic connections (there are only an estimated 100–400 billion stars in the Milky Way galaxy).
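A bit of back-of-the-envelope arithmetic, here sketched in Python and using only the estimates quoted above, helps put those numbers in perspective:

```python
# Back-of-the-envelope arithmetic from the estimates quoted above.
neurons = 100e9          # ~100 billion neurons in a human brain
synapses_low = 100e12    # ~100 trillion synaptic connections (low estimate)
synapses_high = 1e15     # ~1 quadrillion (high estimate)
stars_high = 400e9       # upper estimate of stars in the Milky Way

per_neuron_low = synapses_low / neurons     # average synapses per neuron
per_neuron_high = synapses_high / neurons
ratio = synapses_low / stars_high           # synapses vs. stars

print(f"Each neuron averages {per_neuron_low:,.0f}-{per_neuron_high:,.0f} synapses")
print(f"Even the low estimate is {ratio:,.0f}x the Milky Way's star count")
```

Even at the low end, every neuron is wired into roughly a thousand others, which is why mapping a connectome one slice at a time is such heavy lifting.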
Efforts are currently underway to map the connectome at a number of levels, from the relatively coarse resolution of diffusion MRI to the subcellular level of electron microscopy. That’s a story for another day, but if you’re interested in this topic, I highly recommend Sebastian Seung’s eminently readable 2012 book, Connectome: How the Brain’s Wiring Makes Us Who We Are.

Back to the Allen Institute datasets. When you click on ‘Mouse Connectivity’, the site presents you with an index of injection sites, 47 in all. Let’s click on “visual areas.” The next page that comes up is a list of datasets that include that region. For the sake of this example, let’s click on the first entry in that list, “Primary visual area,” experiment #100141219.

The resulting page contains 140 fluorescent images of brain tissue slices in shades of orange and green. Click one to see it enlarged. Orange areas are non-fluorescent – they didn’t take up the tracer, meaning they are not physically connected to the injection site. On the bottom of the window is a series of navigation tools – you can tiptoe through the thalamus if you’d like, simply by moving these sliders left-right, up-down, and front-back. Just like a real neuroscientist!

This is your brain (well, a mouse brain) on rAAV (a fluorescent tracer).

You can also zoom in to the cellular level. Here’s a close-up of a densely fluorescent area of the mouse brain — you can actually see individual neurons in this view. 

This is a closeup of your brain on rAAV. (Again, if you were a mouse)

Another option is to download the Allen Institute’s free Brain Explorer software, a standalone program that lets you view these data offline. With Brain Explorer you can “step” through the brain slice by slice, rotate it, and highlight regions. It’s way cool, even if (like me) you don’t know very much about brain anatomy.
Here’s a screenshot from the application, showing gene expression data in the center of the brain.

Screenshot of the Allen Institute’s Brain Explorer software

If you’re interested in how the amazing researchers at the Allen Institute are doing this work, they lay it out for you in a nice series of white papers (here’s the one on the mouse connectivity mapping project). I recommend you take a look!
The opinions expressed in this post do not necessarily reflect or conflict with those of the DXS editorial team or contributors.

Why is the sky pink?

On Mars, the sky is pink during the day, shading to blue at sunset. What planet did you think I was talking about?

On Earth, the sky is blue during daytime, turning red as the sun sinks toward night.

Scattering light

Well, it’s not quite as simple as that: if you ignore your dear sainted mother’s warning and look at the Sun, you’ll see that the sky immediately around the Sun is white, and the sky right at the horizon (if you live in a place where you can get an unobstructed view) is much paler. In between the Sun and the horizon, the sky gradually changes hue, as well as varying through the day. That’s a good clue to help us answer the question every child has asked: why is the sky blue? Or as a Martian child might ask: why is the sky pink?

First of all, light isn’t being absorbed. If you wear a blue shirt, that means the dye in the cotton (or whatever it’s made of) absorbs other colors in light, so only blue is reflected back to your eye. That’s not what’s happening in the air! Instead, light is being bounced off air molecules, a process known as scattering. Air on Earth is about 80% nitrogen, with almost all of the rest being oxygen, so those are the main molecules for us to think about.

As I discussed in my earlier article on fluorescent lights, atoms and molecules can only absorb light of certain colors, based on the laws of quantum mechanics. While oxygen and nitrogen do absorb some of the colors in sunlight, they turn right around and re-emit that light. (I’m oversimplifying slightly, but the main thing is that photons aren’t lost to the world!) However, other colors don’t just pass through atoms as though they aren’t there: they can still interact, and what determines how that interaction happens is, again, the color.

The color of light is determined by its wavelength: how far a wave travels before it repeats itself. Wavelength is also connected to energy: short wavelengths (blue and violet light) have high energy, while long wavelengths (red light) have lower energy. When a photon (a particle of light) hits a nitrogen or oxygen molecule, it might hit one of the electrons inside the molecule. Unless the wavelength is exactly right, the photon doesn’t get absorbed and the electron doesn’t move, so all the photon can do is bounce off, like a pool ball off the rail on a billiards table. Low-energy red photons don’t change direction much after bouncing–they hit the electron too gently for that. Higher-energy blue and violet photons, on the other hand, scatter by quite a bit: they end up moving in a very different direction after hitting an electron than they were moving before. This whole process is known technically as Rayleigh scattering, for the physicist John Strutt, Lord Rayleigh.
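For readers who want the quantitative version: the strength of Rayleigh scattering scales as the inverse fourth power of the wavelength. A rough comparison of typical blue (~450 nm) and red (~650 nm) wavelengths looks like this (a quick Python sketch; the wavelengths are representative values, not exact band centers):

```python
# Rayleigh scattering strength scales as 1 / wavelength^4.
def rayleigh_strength(wavelength_nm):
    """Relative scattering strength (arbitrary units)."""
    return (1.0 / wavelength_nm) ** 4

blue = rayleigh_strength(450)  # blue light, ~450 nm
red = rayleigh_strength(650)   # red light, ~650 nm
print(round(blue / red, 1))    # 4.4 -- blue scatters ~4x more than red
```

That factor of roughly four is why scattered skylight is dominated by the blue end of the spectrum.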

The blue color of the sky

Not every photon will hit a molecule as it passes through the atmosphere, and light from the Sun contains all the colors mixed together into white light. That means if you look directly at the Sun or the sky right around the Sun during broad daylight, what you see is mostly unscattered light, the photons that pass through the air unmolested, making both Sun and sky look white. (By the way, your body is pretty good at making sure you won’t damage your vision: your reflexes will usually twitch your eyes away before any injury happens. I still don’t recommend looking at the Sun directly for any length of time, especially with sunglasses, which can fool your reflexes into thinking everything is safer than it really is.) In other parts of the sky away from the Sun, scattering is going to be more significant.

The Sun is a long way away, so unlike a light bulb in a house, the light we get from it comes in parallel beams. If you look at a part of the sky away from the Sun, in other words, you’re seeing scattered light! Red light doesn’t get scattered much, so not much of that comes to you, but blue light does, meaning the sky appears blue to our eyes. Bingo! Since there is some green and other colors mixed in as well, the apparent color of the sky is more a blue-white than a pure blue.

(The Sun’s light doesn’t contain as much violet light as it does blue or red, so we won’t see a purple sky. It also helps that our eyes don’t respond strongly to violet light. The cone cells in our retinas are tuned to respond to blue, green, and red, so the other colors are perceived by triggering combinations of the primary cone cells.)

At sunset, light is traveling through a lot more air than it does at noon. That means every ray of light has more of a chance to scatter, removing the blue light before it reaches our eyes. What’s left is red light, making the sky at the horizon near the Sun appear red. In fact, you see more gradations of color too: moving your vision higher in the sky, you’ll note red shades into orange into yellow and so forth, but each color is less intense.
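To put rough numbers on the sunset effect: the fraction of light surviving scattering falls off exponentially with the amount of air it crosses (the Beer-Lambert law), and the horizon path crosses roughly 38 times more air than the straight-up path. The optical depths below are approximate, illustrative values for blue and red light, not precise atmospheric data:

```python
import math

# Illustrative (approximate) Rayleigh optical depths of the whole
# atmosphere, looking straight up, for two colors of light:
TAU = {"blue (450 nm)": 0.21, "red (650 nm)": 0.05}

def transmitted(tau, air_mass):
    """Fraction of light surviving scattering (Beer-Lambert law)."""
    return math.exp(-tau * air_mass)

for color, tau in TAU.items():
    noon = transmitted(tau, 1.0)     # Sun overhead: path = 1 atmosphere
    sunset = transmitted(tau, 38.0)  # Sun on horizon: ~38x more air
    print(f"{color}: noon {noon:.0%}, sunset {sunset:.2%}")
```

With these numbers, most blue light gets through at noon but almost none survives the long horizon path, while a sizable fraction of red light still does, which is why the Sun and the sky around it redden at sunset.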

So finally: why is the Martian sky pink? The answer is dust: the surface of Mars is covered in a fine powder, more like talcum than sand. During the frequent windstorms that sweep across the planet, this dust is blown high into the air, where light (yes) scatters off of it. Since the grains are larger than air molecules, the kind of scattering is different, and tends to make the light appear red. (Actually, the sky’s “true” color is very hard to determine, since there is a lot more variation than on Earth.) When there is less dust in the atmosphere, the Martian sky is a deep blue, as the Sun’s light scatters off the carbon dioxide molecules in the air.

By DXS Physics Editor Matthew Francis

Welcome to the 21st century and welcome to MARS

Parachutes and SKY CRANES!
Image credit: NASA. Woohoo, NASA!
by Emily Willingham, DXS managing editor, who totally stayed up to watch all of this unfold

Update: Check out below what you see in the above graphic, except that it’s a real image of the real rover with its real parachute, heading for the surface of Mars! Image by way of the Bad Astronomer, a.k.a. Phil Plait.

Because we are freaky, geeky, and totally tweaky excited about the Mars Curiosity landing (woohoo!), today we bring you a links roundup related to this event. For some perspective–my own–I was born the year before the first people walked on the moon, an event known as the Moon Landing. That day was such a big deal that in a photo book of baby images capturing my first year, six dim Polaroid photos of the moon landing take up the entire last page, fuzzy, blurry images of our ancient Zenith television, including one of an Earth-bound Walter Cronkite (I still miss that man) wiping his face in disbelief. As someone who was born in the mid-20th century and knew and lived with people born in the 1800s, I am in awe of what I’m seeing today in the second decade of the 21st century.

You can relive that moment from 43 years ago in the video below. You might even recognize the real-life versions of some of the characters who featured in Apollo 13, one of my favorite movies. I also am a fan of Janet Armstrong’s hair in this video. The entire clip sequence is typical ’60s news television and features a strikingly young Mike Wallace. Armstrong is on the moon at around 9:39. “One small step for man… “

The moon is a mere 238,900 miles away from us. We could practically fly there on a space plane (assumes this biologist). But Mars? That’s 350 million miles. The rover we just dropped on the planet, using technology with shades of the latest Star Trek movie, will spend a planned two years roving the red planet, sending back data about what it finds. The Great Hope, of course, is that one thing it will find is signs of Life.

Now, enjoy this video of the successful Curiosity landing from the wee hours this morning. “Thumbnails complete! We’ve got thumbnails! Woohooooo!” My favorite quote: “You can see dust particles on the window!” 

Then, visit NASA’s page dedicated to the Rover Curiosity, where NASA’s posting great images from 350 million miles away.  

Our own physics editor, Matthew Francis, has a post up over at his blog, Galileo’s Pendulum, giving a personal perspective on this historic event. He’s also included links to a post by Emily Lakdawalla telling us what comes next for Curiosity and to ArsTechnica’s retrospective overview of Mars missions

The L.A. Times, near ground zero of mission control, has a lengthy piece complete with links to photo essays. Worth exploring and enjoying. 

Finally, just follow Mars Curiosity itself @MarsCuriosity (natch) and follow along at the related hashtags:

A couple of these are currently even trending on Twitter, which gives me hope for science and humanity. In that spirit, I leave you with a screenshot of this tweet from Story Collider’s Ben Lillie:

Science, FTW! We sure have come a long way since 1969, baby.

Don’t worry so much about being the right type of science role model

Role models: How do they look?
[Today we have a wonderful guest post from Marie-Claire Shanahan, continuing the conversation about what makes someone a good role model in science. This post first appeared at Shanahan's science education blog, Boundary Vision, and she has graciously agreed to let us share it here, too. Shanahan is an Associate Professor of Science Education and Science Communication at the University of Alberta where she researches social aspects of science such as how and why students decide to pursue science degrees. She teaches courses in science teaching methods, scientific language and sociology of science. Marie-Claire is also a former middle and high school science and math teacher and was thrilled last week when one of her past sixth grade students emailed to ask for advice on becoming a science teacher. She blogs regularly about science education at Boundary Vision and about her love of science and music at The Finch & Pea.]

What does it mean to be a good role model? Am I a good role model? Playing around with kids at home or in the middle of a science classroom, adults often ask themselves these questions, especially when it comes to girls and science. But despite having asked them many times myself, I don’t think they’re the right questions.

Studying how role models influence students shows a process that is much more complicated than it first seems. In some studies, when female students interact with more female professors and peers in science, their own self-concepts in science can be improved [1]. Other studies show that the number of female science teachers at their school seems to have no effect [2].

Finding just the right type of role model is even more challenging. Do role models have to be female? Do they have to be of the same race as the students? There is often an assumption that even images and stories can change students’ minds about who can do science. If so, does it help to show very feminine women with interests in science like the science cheerleaders? The answer in most of these studies is, almost predictably, yes and no.

Diana Betz and Denise Sekaquaptewa’s recent study “My Fair Physicist: Feminine Math and Science role models demotivate young girls” seems to muddy the waters even further, suggesting that overly feminine role models might actually have a negative effect on students. [3] The study caught my eye when PhD student Sara Callori wrote about it and shared that it made her worry about her own efforts to be a good role model.

Betz and Sekaquaptewa worked with two groups of middle school girls. With the first group (144 girls, mostly 11 and 12 years old) they first asked the girls for their three favourite school subjects and categorized any who said science or math as STEM-identified (STEM: Science, Technology, Engineering and Math). All of the girls then read articles about three role models. Some were science/math role models and some were general role models (i.e., described as generally successful students). 

The researchers mixed things up even further so that some of the role models were purposefully feminine (e.g., shown wearing pink and saying they were interested in fashion magazines) and others were supposedly neutral (e.g., shown wearing dark colours and glasses and enjoying reading).* There were feminine and neutral examples for both STEM and non-STEM role models. After the girls read the three articles, the researchers asked them about their future plans to study math and their current perceptions of their abilities and interest in math.**

For the most part, the results were as expected. The STEM-identified girls showed more interest in studying math in the future (not really a surprise since they’d already said math and science were their favourite subjects) and the role models didn’t seem to have any effect. Their minds were, for the most part, already made up.

What about the non-STEM identified girls: did the role models help them? It’s hard to tell exactly, because the researchers didn’t measure the girls’ desire to study math before they read about the role models. It seems, though, that reading about feminine science role models took away from their desire to study math both in the present and the future. Those who were non-STEM identified and read about feminine STEM role models rated their interest significantly lower than other non-STEM identified girls who read about neutral STEM role models or about non-STEM role models. A little surprising was the additional finding that the feminine role models also seemed to lower STEM-identified girls’ current interest in math (though not their future interest).

The authors argue that the issue is unattainability. Other studies have shown that role models can sometimes be intimidating. They can actually turn students off if they seem too successful, such that their career or life paths seem out of reach, or if students can write them off as being much more talented or lucky than themselves. Betz and Sekaquaptewa suggest that the femininity of the role models made them seem doubly successful and therefore even more out of the students’ reach.

The second part of the study was designed to answer this question but is much weaker in design, so it’s difficult to say what it adds to the discussion. They used a similar design but with only the STEM role models, feminine and non-feminine (and only 42 students, 20% of whom didn’t receive part of the questionnaire due to an error). The only difference was that instead of asking about students’ interest in studying math, they tried to look at the combination of femininity and math success by asking two questions:

  1. “How likely do you think it is that you could be both as successful in math/science AND as feminine or girly as these students by the end of high school?” (p. 5)
  2. “Do being good at math and being girly go together?” (p. 5)

Honestly, it’s at this point that the study loses me. The first question has serious validity issues (and nowhere in the study is the validity of the outcome measures established). First, there are different ways to interpret the question and for students to decide on a rating. A low rating could mean a student doesn’t think they’ll succeed in science even if they really want to. A low rating could also mean that a student has no interest in femininity and rejects the very idea of being successful at both. These are very different things and make the results almost impossible to interpret. 

Second, these “successes” are likely different in kind. Succeeding in academics is time dependent, and it makes sense to ask young students if they aspire to be successful in science. Feminine identity is less future oriented and more likely to be seen as a trait rather than a skill that is developed. It probably doesn’t make sense to ask students if they aspire to be more feminine, especially when femininity has been defined as liking fashion magazines and wearing pink.

Question: Dear student, do you aspire to grow up to wear more pink? 

Answer (regardless of femininity): Um, that’s a weird question.

With these questions, they found that non-STEM identified girls rated themselves as unlikely to match the dual success of the feminine STEM role models. Because of the problems with the items though, it’s difficult to say what that means. The authors do raise an interesting question about unattainability, though, and I hope they’ll continue to look for ways to explore it further.

So, should graduate students like Sara Callori be worried? Like lots of researchers who care deeply about science, Sara expressed a commendable and strong desire to make a contribution to inspiring young women in physics (a field that continues to have a serious gender imbalance). She writes about her desire to encourage young students and be a good role model:

When I made the decision to go into graduate school for physics, however, my outlook changed. I wanted to be someone who bucked the stereotype: a fashionable, fun, young woman who also is a successful physicist. I thought that if I didn’t look like the stereotypical physicist, I could be someone that was a role model to younger students by demonstrating an alternative to the stereotype of who can be a scientist. …This study also unsettled me on a personal level. I’ve long desired to be a role model to younger students. I enjoy sharing the excitement of physics, especially with those who might be turned away from the subject because of stereotypes or negative perceptions. I always thought that by being outgoing, fun, and yes, feminine would enable me to reach students who see physics as the domain of old white men. These results have me questioning myself, which can only hurt my outreach efforts by making me more self conscious about them. They make me wonder if I have to be disingenuous about who I am in order to avoid being seen as “too feminine” for physics.

To everyone who has felt this way, my strong answer is: NO, please don’t let this dissuade you from outreach efforts. Despite results like this, when studies look at the impact of role models in comparison to other influences, relationships always win over symbols. The role models that make a difference are not the people that kids read about in magazines or that visit their classes for a short period of time. The role models, really mentors, that matter are people in students’ lives: teachers, parents, peers, neighbours, camp leaders, and class volunteers. And for the most part it doesn’t depend on their gender or even their educational success. What matters is how they interact with and support the students. 
Good role models are there for students: they believe in their abilities and help them explore their own interests.

My advice? Don’t worry about how feminine or masculine you are or whether you have the right characteristics to be a role model; just get out there and get to know the kids you want to encourage. Think about what you can do to build their self-confidence in science or to help them find a topic they are passionate about. When it comes to making the most of the interactions you have with science students, there are a few tips for success (and none of them hinge on wearing or not wearing pink):

§   Be supportive and encouraging of students’ interest in science. Take their ideas and aspirations seriously and let them know that you believe in them. This turns out to be one of the most powerful influences, by far, in whether people pursue science. If you do one thing in your interactions with students, make it this.

§  Share with students why you love doing science. What are the benefits of being a scientist, such as contributing to improving people’s lives or solving difficult problems? Students often desire careers that offer these kinds of personal satisfaction but don’t always realize that being a scientist can be like that.

§  Don’t hide the fact that there are gender differences in participation in some areas of science (especially physics and engineering). Talk honestly with students about it, being sure to emphasize that differences in ability are NOT the reason for the discrepancies. Talk, for example, about evidence that girls are not given as many opportunities to explore and play with mechanical objects and ask them for their ideas about why some people choose these sciences and others don’t.
There are so many ways to encourage and support students in science; don’t waste time worrying about being the perfect role model. If you’re genuinely interested in taking time to connect with students, you are already the right type.

* There are of course immediate questions about how well supported these are as feminine characteristics but I’m willing to allow the researchers that they could probably only choose a few characteristics and had to try to find things that would seem immediately feminine to 11-12 year olds. I still think it’s a shallow treatment of femininity, one that disregards differences in cultural and class definitions of femininity. (And I may or may not still be trying to sort out my feelings about being their gender neutral stereotype, says she wearing grey with large frame glasses and a stack of books beside her).

**The researchers unfortunately did not distinguish between science and math, using them interchangeably despite large differences in gender representation and connections to femininity between biological sciences, physical sciences, math and various branches of engineering.

[1] Stout, J. G., Dasgupta, N., Hunsinger, M., & McManus, M. A. (2011). STEMing the tide: Using ingroup experts to inoculate women’s self-concept in science, technology, engineering, and mathematics (STEM). Journal of Personality and Social Psychology, 100, 255–270.

[2] Gilmartin, S., Denson, N., Li, E., Bryant, A., & Aschbacher, P. (2007). Gender ratios in high school science departments: The effect of percent female faculty on multiple dimensions of students’ science identities. Journal of Research in Science Teaching, 44, 980–1009.

[3] Betz, D., & Sekaquaptewa, D. (2012). My Fair Physicist? Feminine Math and Science Role Models Demotivate Young Girls. Social Psychological and Personality Science. DOI: 10.1177/1948550612440735

Further Reading

Buck, G. A., Leslie-Pelecky, D., & Kirby, S. K. (2002). Bringing female scientists into the elementary classroom: Confronting the strength of elementary students’ stereotypical images of scientists. Journal of Elementary Science Education, 14(2), 1-9.

Buck, G. A., Plano Clark, V. L., Leslie-Pelecky, D., Lu, Y., & Cerda-Lizarraga, P. (2008). Examining the cognitive processes used by adolescent girls and women scientists in identifying science role models: A feminist approach. Science Education, 92, 2–20.

Cleaves, A. (2005). The formation of science choices in secondary school. International Journal of Science Education, 27, 471–486.

Ratelle, C.F., Larose, S., Guay, F., & Senecal, C. (2005). Perceptions of parental involvement and support as predictors of college students’ persistence in a science curriculum. Journal of Family Psychology, 19, 286–293.

Simpkins, S. D., Davis-Kean, P. E., & Eccles, J. S. (2006). Math and science motivation: A longitudinal examination of the links between choices and beliefs. Developmental Psychology, 42, 70–83.

Stout, J. G., Dasgupta, N., Hunsinger, M., & McManus, M. (2011). STEMing the tide: Using ingroup experts to inoculate women’s self-concept and professional goals in science, technology, engineering, and mathematics (STEM). Journal of Personality and Social Psychology, 100, 255–270.

Two Science Online 2012 sessions for your consideration

Tomorrow, I head for North Carolina to attend Science Online 2012. I attended last year as an information sponge and observer who knew no one and experienced some highlights and lowlights. This year, I’m attending as a participant and as a moderator of two sessions. The first session, on Thursday afternoon, is with Deborah Blum, and we’ll be leading a discussion about how and when to include basic science in health and medical writing without distracting the reader. The second session I’m moderating is with Maia Szalavitz, and we’ll be talking about whether or not it’s possible to write in health and medicine as an advocate and still be even-handed. Session descriptions are below, as are the topics that we’ll be tossing around for discussion.

Thursday, 2:45 p.m.: The basic science behind the medical research: Where to find it, how and when to use it. 

Sometimes, a medical story makes no sense without the context of the basic science–the molecules, cells, and processes that led to the medical results. At other times, inclusion of the basic science can simply enhance the story. How can science writers, especially those without specific training in science, find, understand, and explain that context? As important, when should they use it? The answers to the second question can depend on publishing context, intent, and word count. This session will feature moderators with experience incorporating basic science information into medically based pieces, who will share their insights into the whens and whys of using it. The session will also include specific examples of what the moderators and audience have found works and doesn’t work in their own writing.

Deborah and I have been talking about some issues we’d like to raise for discussion. The possibilities are expansive. Some highlights:

  • Scientific explanation (and understanding) is the foundation for the best science writing. In fact, if the writer doesn’t understand the science, he or she may miss the most important part of the story. But we worry that pausing to explain can slow a story down or disrupt the flow. In print, writers deal with this by condensing and simplifying explanations and also by trying to make them lively and vivid, such as by use of analogy. But online, we use hotlinks as often, if not more often, for the same purpose.
  • Reaching a balance between links and prose can be a difficult task. Another possible pitfall is writing an explanation that’s more about teaching ourselves than it is about informing a reader sufficiently for story comprehension. How many writers run into that problem?
  • On-line, the temptation is to give the barest explanation and then link to the fuller account, but that approach has pros and cons. More information is available to the reader and the sourcing is transparent. But how often do readers follow those links – and how often do they return? Links are not necessarily evergreen, they can lose the reader (a link can be an exit portal), and the reader may not use them at all, thus missing some of the story’s relevant information.
  • A reader may actually learn more from a print story where there are no built-in escape clauses. So how does the on-line science writer best construct a story that illuminates the subject? Are readers learning as much from our work online as they do from a print version? (And there’s that age-old question of, Are we here to teach or to inform?)
  • Are we diminishing our own craft if we use links to let others tell the story for us? If we simply link out rather than working to supply an accessible explanation, negatives could include not pushing ourselves as writers and not expanding our own knowledge base, both essential to our craft.
  • How much do we actually owe our readers here? How much work should we expect them to do?
  • What are some ways to address issues of flow, balance, clarity? One possibility is, of course, expert quotes. Twitter is buzzing with scientists, many of whom likely would be pleased to explain a concept or brainstorm about it. (I’ve helped people who have “crowdsourced” in this way for a story, just providing an understandable, basic explanation for something complex).
  • Deborah and I are considering a challenge for the audience with a couple of basic science descriptives, to define them for a non-expert audience without using typical hackneyed phrases. Ideas for this challenge are welcome.
  • We also will feature some examples from our own work in which we think we bollixed up something in trying to explain it (overexplained or did it more for our own understanding than the reader’s) and examples from our own or others’ work of good accessible writing explaining a basic concept. We particularly want to show some explanations of quite complicated concepts–some that worked, some that didn’t. Suggestions for these are welcome!
  • Finally, when we do use links in our online writing, what constitutes a quality link?
Saturday, 10:45 a.m.: Advocacy in medical blogging/communication. Can you be an advocate and still be fair?
There is already a session on how reporting facts on controversial topics can lead to accusations of advocacy. But what if you *are* an avowed advocate in a medical context, either as a person with a specific condition (autism, multiple sclerosis, cancer, heart disease) or an ally? How can you, as a self-advocate or ally of an advocate, still retain credibility–and for what audience?

The genesis of this session was my experience in the autism community. I’m an advocate of neurodiversity, the basic premise of which is that people of all neurologies have potential that should be sought, emphasized, and nurtured over their disabilities. Maia, the co-moderator of our session, has her own story of advocacy to tell as a writer about pain, pain medication, mental health, and addiction. 

Either of these topics is controversial, and when you’ve put yourself forward as an advocate, how can you also present as a trustworthy voice on the subject? Maia and I will lead a discussion that will hit, among other things, on the following topics that we hope will lead to a vigorous exchange and input from people whose advocacy is in other arenas:

  • Can stating facts or scientific findings themselves lead to a perception of advocacy? Maia’s experience is, for example, about observing that heroin doesn’t addict everyone who tries it. My example is about noting the facts from research studies that have identified no autism-vaccine link.
  • Any time either of us talks about vaccines or medications for mental health, we’ve run into accusations of being a “Big Pharma tool” or worse. What response do such accusations require, and what constitutes a conflict of interest here? How corrupted are data that are linked to pharma involvement? If drug companies are the only possible source of funding for particular studies…do we ignore their data completely?
  • We both agree that having an advocacy bias seems to strengthen our skeptical thinking skills, that it leads us to dig into data with an attitude of looking for facts and going beyond the conventional wisdom in a way that someone less invested might not do. Would audience members agree?
  • In keeping with that, are advocates in fact in some ways more willing to acknowledge complexities and grey areas rather than reducing every situation to black and white?
  • We also want to talk about how the passion of advocacy can lead to a level of expertise that may not be as easily obtained without some bias.
  • That said, another issue that then arises is, How do you grapple with confirmation bias? We argue that you have to consciously be ready to shift angle and conclusions when new information drives you that way–just as a scientist should.
  • One issue that has come to the forefront lately is the idea of false equivalence in reporting. Does being an advocate lead to less introduction of false equivalence?
  • We argue that you may not be objective but that you can still be fair–and welcome discussion about that assertion.
  • And as Deborah and I are doing, we’re planning a couple of challenge questions for discussants to get things moving and to produce some examples of our own when we let our bias interfere too much and when we felt that we remained fair.

The entire conference agenda looks so delicious, so full of moderators and session leaders whom I admire, people I know will have insights and new viewpoints for me. The sheer expanse of choice has left me as-yet unable to select for myself which sessions I will attend. If you’re in the planning stages and see something you like for either of these sessions, please join us and…bring your discussion ideas! 

See you in NC.