Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version

Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more implausible is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.

                                                  

Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off carbon 2 of each sugar? You’ll see that carbon 2 of ribose has an –OH, rather than an H. The deoxyribose is called that because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.

Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
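As a quick worked example (added here, not part of the original text): the atom bookkeeping of dehydration synthesis balances exactly. Joining two glucose monomers into a disaccharide such as maltose releases one molecule of water, and hydrolysis simply runs the arrow the other way:

$$\mathrm{C_6H_{12}O_6} + \mathrm{C_6H_{12}O_6} \longrightarrow \mathrm{C_{12}H_{22}O_{11}} + \mathrm{H_2O}$$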

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and access it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose as a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. The addition of this single functional group turns the cellulose-like polymer into a different structural molecule, the one that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat. The fat is saturated to its fullest with every covalent bond taken by hydrogens single bonded to the carbons.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconuts, for example, are also known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is–one that has some hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with each other, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.
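To put a number on the “missing hydrogens” idea (a small added calculation, not from the original text): a straight-chain fatty acid with n carbons and d carbon-carbon double bonds has the general formula

$$\mathrm{C}_{n}\mathrm{H}_{2n-2d}\mathrm{O}_{2}$$

so every double bond costs the chain two hydrogens. The 18-carbon fatty acids in the sidebar above illustrate it: oleic acid (one double bond) is C18H34O2, linoleic acid (two) is C18H32O2, and linolenic acid (three) is C18H30O2, compared with C18H36O2 for a fully saturated 18-carbon chain.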

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, in part helps us out because it removes the bad cholesterol, low-density lipoprotein or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.

Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked together form a polypeptide. When many amino acids strung together interact with each other to form a properly folded molecule, we call that molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.

Once those interactions have occurred, the protein can fold into its final, or tertiary structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the amino acid chain’s secondary interactions must usually be ongoing, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.

In many instances, however, a complete protein consists of two or more interacting amino acid chains. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes, the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
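If it helps to see those pairing rules applied mechanically, here is a minimal sketch in Python (an added illustration, not from the original post; it ignores strand directionality and all of the enzymes a real cell uses). It applies the A-T/U and C-G rules to one DNA template, once to build the matching DNA strand and once to build the RNA copy:

    # Toy illustration of base pairing (added example; a real cell uses
    # polymerase enzymes and reads the template in a specific direction).
    DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}  # DNA paired with DNA
    RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}  # DNA paired with RNA

    def complement_dna(template: str) -> str:
        """Return the DNA strand that pairs with the template (replication)."""
        return "".join(DNA_PAIR[base] for base in template)

    def transcribe_to_rna(template: str) -> str:
        """Return the RNA strand that pairs with the template (transcription)."""
        return "".join(RNA_PAIR[base] for base in template)

    template = "ATGCCGTA"
    print(complement_dna(template))     # TACGGCAT
    print(transcribe_to_rna(template))  # UACGGCAU

Same pairing rules, two different products: one rebuilds DNA for cell division, the other writes the message out in RNA, with uracil standing in wherever thymine would have gone.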

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.


 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Crowdfunding on the Brain: Finding Biomarkers for Early Autism Diagnosis

By Biology Editor, Jeanne Garbarino


If a child is diagnosed with autism spectrum disorder (ASD), it is because they have gone through a number of rigorous behavioral tests, often over a long period of time, and the process is never straightforward. Of course, this time can be stressful for parents or caregivers, and sometimes the answers can lead to even more questions. One solution to the waiting and uncertainty would be to have a medical test that could more easily diagnose ASD. However, no one has been able to identify biomarkers – molecules in the body that can help define a specific medical condition – for the condition. Without this type of information, it is not possible to create a diagnostic test for autism.


Having been through this process with their son, who is on the autism spectrum, Clarkson University scientists Costel Darie and Alisa Woods have decided to work together to help address this issue. An interdisciplinary laboratory that combines hardcore proteomics (the study of the proteins we make) with cognitive neuroscience is probably not what you think of when it comes to running a family business. But for Darie and Woods, “marriage” has many meanings. This husband and wife team has combined their brainpower to embark on a scientific journey toward understanding some of the biochemistry behind autism, and they are walking on an increasingly popular path to help finance their work: crowdfunding.


A major goal of the Darie Lab is to identify biomarkers that are associated with autism and then to create a medical test to help alleviate some of the frustrations that come with the ASD diagnostic process. Using a technology called high-definition mass spectrometry, the Darie Lab has outlined a project to figure out the types of proteins that are in the saliva or blood of children with ASD and compare these protein profiles to the saliva or blood from children who are not on the autism spectrum. If the Darie Lab is successful, they might be able to help create a diagnostic test for early autism detection, which would undoubtedly fill a giant void in the field of autism research and treatment.


Here is how the experiment will work: The members of the Darie Lab will collect saliva (and/or blood) samples from children, half of whom are on the autism spectrum and half of whom are not. The researchers will prepare the saliva or blood and collect the proteins. Each protein will be analyzed by a high definition mass spectrometer, which is basically a small scale for measuring the weight and charge of a protein. The high definition mass spectrometer will transfer information about the proteins to a computer, with special software allowing the Darie Lab investigators to figure out the exact makeup of proteins in each sample.


The bottleneck when it comes to these experiments is not getting samples (saliva and blood are easy to collect), and it isn’t the high-tech high-definition mass spectrometer because they have access to one.  Rather, the bottleneck comes from the very high cost of the analytical software they need. Because this software was not included in their annual laboratory budget but is critical to conducting this experiment, the Darie Lab is raising money through crowdfunding.


Why I think a contribution is worth the investment: Technology is always advancing, especially when it comes to protein biochemistry. The high-definition mass spectrometer is a recent technology, and according to the Darie Lab, they have been able to identify over 700 proteins in the saliva alone. This is quite an incredible step up from traditional mass spectrometers, which could detect only around 100 proteins in saliva. Just because we haven’t been able to identify biomarkers for autism in the past doesn’t mean we can’t do it now. 

In addition to the use of this new technology, the Darie Lab presents some compelling preliminary evidence for a difference in protein profiles between those with ASD and those who do not have ASD. While they’ve examined only three autistic people and compared them to three non-ASD individuals, the two groups were clearly distinct in their saliva protein profiles. If this pattern holds up with an increased number of study participants, the implications could be quite significant for autism research.      
Preliminary data from the Darie Lab show saliva proteins with a 20X or greater difference between ASD individuals (ovals) and sibling non-ASD controls (rectangles).

If you decide to kick in some funds, your good deed will not go unrewarded. As a thank-you for contributing, the Darie Lab has offered up a few cool perks, including high-quality prints of microscopic images of the brain.



If you are looking for a good cause, look no further. I am excited to see how the Darie Lab crowdfund experience goes, and I wish them all the best in their quest, both as professionals and as parents.  To find out more, or to make a donation, visit the Darie Lab RocketHub page.

Fluorescent images of the brain, available to those donating $100 or more.
The opinions expressed in this post do not necessarily agree or conflict with those of the DXS editorial team and contributors.

No gene is an island: What do scientists mean when they talk about environment and genes?

Nope. This island does not represent your genes. (Source)

When you read news stories about what affects a developing human in the womb or how cancer or obesity arises, you probably also see references to genes and environment. Some articles may focus on genes versus environment, or mention that something is “mostly” genetic or that the “environment” contributes to a disorder or trait in some way.

What some people may not realize is that “environment” to a scientist talking about genetics may be something very different from “environment” to a non-scientist reading a news article. While a scientist may be vividly imagining a bustling microenvironment of native molecules in the way only scientists seem to do, the general reader may simply be thinking about “toxins” or “chemicals.” That’s why Double X Science is here to help with a primer on what those scientist types may mean when they talk about genes and environment. See how useful we are? Tell your friends! (Speaking of environmental influences… ).

Where does environment begin and end? Let’s begin at the end
No gene is an island. Your genes consist in part of a special code that is really an instruction manual. Your cells rely on internal translators to decode these instructions and use them as a guide to make various proteins, the molecules that give your cells, tissues, organs, organ systems, and you much of their structure and function. Proteins do thousands of jobs, from breaking down food to building and replacing tissues to governing cell division. Most of your cells are engaged in making proteins, a complex, exquisitely regulated and multi-step process. But they don’t do it in a vacuum.

That code the cell uses to build the protein? That instruction manual is susceptible to all kinds of interference. Pages get torn out or folded over or stuck together. The words of the code can be changed, sometimes subtly, sometimes unmistakably, and all kinds of factors can jumble up those words so that the cell ends up making a protein that isn’t quite what was intended. It’s even possible to use the cellular version of Liquid Paper(TM) to mask the code so that the cell doesn’t recognize its existence. Sometimes, these changes have no observable effect. Sometimes, they have big bad effects, such as disease, or helpful outcomes, such as disease resistance.

That code sits in a cell in a body (you) made of trillions of cells doing hundreds of different jobs, taking in things from the environment, playing host to millions of other organisms (themselves an environment), altering and shifting with every passing second as the whole system works to keep you together and functioning within certain acceptable limits for human life. All of these processes can influence the code, leading the cell to use it, change it, use only certain parts of it, Liquid Paper over it, tweak what results from its instructions, or just ignore it. It’s impossible for any code in that situation to function in the total absence of influence from its environment, in part because the code itself is just the beginning. Much of the environment’s influence is reflected in what the cell does with the instructions, not just what the instructions say. 

This multitude of environmental influences is one reason that even people with identical genetic codes can have differences in diseases we think of as being largely genetic. No gene–no code–is an island. You are not your genes. You are your genes and your environment.

No nucleus is an island. Most of our genes are packaged neatly with the rest of our DNA around molecular spools inside a cellular vault called the nucleus. This vault is a choosy sentry, letting in only certain molecules carrying proper ID. Yet inside the nucleus, there is an environment. This environment is not “toxins” or “chemicals,” the things that many people probably think of when someone says “environment” and talks about genes. But it is a busy place with its own milieu. Some parts of the code are in use, some sit quiet, and many molecules bustle and hustle to maintain, copy, process, or protect these important instructions. Every little bit of this hustle and bustle can influence some aspect of what happens to a code in the nucleus, interfering with or enhancing its use or resulting in accidental changes that may have big effects further down the line. The nucleus is the final stop in the chain of environmental influence, wherever that influence may originate.

No cell is an island. Outside of that vault is the big, wide world of the cell. The cell is the molecular version of a busy metropolis (see beautiful video, The Inner Life of the Cell, below), a complex system of cellular highways that the cell uses to deliver packages internally, take in deliveries from the outside world, and transfer the millions of molecules it’s using and making to the right places at the right time. There’s a generator, a recycling center, guards at the gate, and a protein production facility and processing plant, complete with a post office. And that cell sits in an environment, usually, of many many other cells, also busy with their duties. What happens outside of that cell affects the inside of the cell, altering traffic flows, protein production and packaging, signaling and delivery along the routes, and, ultimately, processes inside the vault called the nucleus, the final destination in the chain of environmental effects. From outside the cell, through the cell, and to the nucleus, every step along the way is one that environment can affect, all the way down to what the cell does with its genes–the codes–for the proteins it makes.



No tissue or organ is an island. A lot of cells working together to do the same thing in your body make up a tissue. Tissues combined together to perform a function are an organ. Let’s take the organ named after living, the liver. It keeps you alive by filtering your blood and reconstructing substances that might harm your cells into less-harmful compounds. Just about everything you ingest gets passed through here. When the liver takes up something like ethanol, the alcohol we ingest at wine o’ clock, and gets to work making it less awful for your body, guess what does that work? The cells that make up the liver. The liver’s environment is their environment is each individual cell’s environment, and eventually, the influence will pass to the nucleus, the final destination in the chain of environmental influence, where the code lies.

You are not an island. And whatever you encounter in this world may well influence you right down to the level of your genes. But while many people might think of “toxins” or “chemicals” when they think of environmental influences on genes, your chemical exposures–and chemicals include oxygen, water, body fluids, nutrients and not-so-nutrients in your foods, medications you may take–are among many, many examples of environmental factors that may reach via a chain reaction all the way to your genes. Some of these factors affect your genes by way of your sensory system: A hug, an angry encounter, a sick child, a laugh with a friend–you respond to each of these environmental influences, often by way of hormones that have a chat with your cells. Your cells respond by adjusting how they use the code in the nucleus so that in the face of anger or love or worry, your body still functions within the essential parameters of life. Below, we list with tongue slightly in cheek a sampling of other factors that constitute an “environment” that could influence your genes and how your cell uses them and the proteins they encode. Whether you know it or not, you’re encountering a million factors every day, big and small, that may trigger some effect way down there in the nuclear vaults of your cells, one that reverberates body wide.

Some examples of “environment” that might influence genes
Environmental influence on genes and how your cells use their instructions and the resulting proteins can come from almost anywhere, any factor, from outside of you and within you. It’s not just about exposures to “bad” chemicals or “toxins.” While the list of potential environmental factors influencing genes and how the cell uses them is practically infinite, we give you just a few examples for thought below:

  • Your parents, siblings, friends, extended family, co-workers, soccer team–you know, other people
  • Infections
  • The billions of microbes that live on you and in you
  • Lifestyle factors like diet, exercise, sleep, stress
  • A dusty house
  • A clean house
  • Hormones, from inside and out
  • Age
  • Sex
  • School
  • Pets
  • Hugs
  • Isolation
  • Crowding
  • Talking
  • Supplements
  • The womb and factors therein
  • Playing outside
  • Playing inside
  • Having sex
  • Abstaining from sex
  • Your job
  • Yogurt?
  • Puberty
  • Other genes
  • Learning things
  • Not learning things
  • Minecraft
  • Mozart
  • Birth order
  • Watching sports
  • Playing sports
  • Sitting a lot
  • Standing a lot
  • Twitter
  • The Sun (and just about everything under it)

You get the idea.

By Emily Willingham, DXS managing editor


These views are the opinion of the author and do not necessarily either reflect or disagree with those of the DXS editorial team.

An antibody therapy for hemophilia A?

Example of an antibody. The interesting bits are purple, as so many interesting things are.
Image credit and license info, via Wikimedia Commons.

By Jeffrey Perkel, DXS tech editor

Last night on TV I caught an ad for Humira, Abbott Laboratories’ prescription medication for a series of conditions including rheumatoid arthritis, psoriatic arthritis, and Crohn’s disease.


The ad noted that, like all drugs, this medication actually has two names, its brand name (Humira), and its generic name, adalimumab. That suffix, -mab, indicates that Humira is a monoclonal antibody, a large protein normally produced by your immune system’s B cells to recognize and eliminate proteins and pathogens that are not “self.” In particular, Humira recognizes, binds, and inactivates the protein called “tumor necrosis factor,” or TNF, which is implicated in various autoimmune disorders.


There are dozens of monoclonal antibody drugs on the market now, including the breast cancer therapeutic Herceptin (trastuzumab), Remicade (infliximab) for autoimmune disorders, and Rituxan (rituximab) for non-Hodgkin lymphoma.(*) In most cases, by binding specific proteins, either in solution or on cell surfaces, these molecules inactivate proteins (as in the case of TNF), target the cell for death, or block inappropriate cell signaling (as in Herceptin). Other antibody designs use the antibody as a “guided missile,” targeting drug or radioisotope “warheads” to cancerous cells.


On Sept. 30, though, a team of Japanese researchers at Chugai Pharmaceutical reported an example of a new kind of antibody application, and it’s pretty slick.


The paper concerns a novel treatment concept for hemophilia A, an X-linked recessive bleeding disorder that affects about 1 in 10,000 men. It is caused by a lack of a clotting protein called factor VIII (FVIII), and the typical treatment is “prophylactic supplementation” of the missing protein.


There are three problems with that treatment, as the paper notes. First, FVIII is expensive. It also must be administered frequently and intravenously, which is especially difficult for pediatric patients and “negatively affects both the implementation of and adherence to the supplementation routine.” But perhaps most significantly, in about 30% of cases the body recognizes the recombinant FVIII as “non-self” or “foreign,” and develops antibodies (“inhibitors”) to inactivate it, rendering the treatment ineffective.


To circumvent that problem, the Chugai team developed what is called a “bispecific antibody” to replace FVIII. So what is a bispecific antibody?


In cartoon form, antibodies resemble the letter Y, with antigen-binding regions at the tip of either branch. In a normal antibody, those two binding regions are identical, such that each antibody can bind two copies of the same protein molecule.


A standard monoclonal antibody has two binding arms, each recognizing the same antigen (protein target).
Source: Wikipedia, http://en.wikipedia.org/wiki/Antibody

A bispecific antibody, though, has two different binding domains, one for each of two proteins, such that it can effectively act as a scaffold to bring two proteins – or the cells they are attached to – together. The only bispecific currently on the market, Trion Pharma’s Removab, acts to couple immune system T cells and macrophages to tumors.

A bispecific antibody, Trion’s Removab.
Source: Wikipedia, http://en.wikipedia.org/wiki/Bispecific_monoclonal_antibody

Chugai’s scientists developed a bispecific antibody that does something different. Their antibody, called hBS23, links two other clotting factors, FIXa and FX, thereby mimicking the function and architecture of the missing FVIII without actually administering it.

FVIII activates FX in the presence of FIXa. hBS23 is a bispecific antibody that replaces FVIII.
(c) 2012 Nature Publishing Group [Nature Medicine, doi:10.1038/nm.2942]

In test tube clotting assays, hBS23 was about 14 times less catalytically efficient than FVIII itself, yet could nevertheless induce clotting, even in cases where the plasma contained inhibitors against FVIII. (Recombinant human FVIII had no effect in those latter cases.) In a non-human primate model of hemophilia A, hBS23 prevented the development of anemia and reduced internal bleeding comparably to FVIII itself.


Significantly, hBS23 lasts a long time in the primate bloodstream – with an IV half-life of 14 days and comparable subcutaneous bioavailability – yet seems unlikely to elicit inhibitory antibodies of its own. That subcutaneous activity is significant, as regular subcu administration should be more easily tolerated than an IV.

Based on these studies, and some simulations, the authors predict that “once weekly dosing of 1 mg per kg body weight of hBS23 would show a continuous hemostatic effect in humans.”

Of course, that’s just a prediction. The proof of the pudding is in the eating, as they say, and only time will tell how hBS23 will fare in people. But don’t look for it on pharmacy shelves any time soon. Clinical trials take time, and further optimization of the antibody design is likely required. Still, the team is obviously upbeat about their strategy’s potential:


“A long-acting, subcutaneously injectable agent that is unaffected by the presence of inhibitors could markedly reduce the burden of care for the treatment of hemophilia A.”


For more details, you can read the report here.

——————————————————
We’ve also got a partner post for you, an antibody explainer by our very own Jeanne Garbarino. Be sure to check it out! 


*Fun fact: If you’ve ever wondered about how drugs get their generic names, they are conferred by the US Adopted Names Council. The names have a kind of prefix/stem structure, linking a manufacturer-supplied but meaningless prefix (adalimu–) with a specific stem (eg, –mab) that denotes the drug class or activity. There are literally hundreds of stems, including –coxib (COX2 inhibitors), –vir (antivirals), and –stat (enzyme inhibitors); for a complete list, click here.

Backyard Brains: Affordable neuroscience

Mouse neurons. Image via Wikimedia Commons.
Originally published in PLoS Biology.

Nerve cells, called neurons, are special cells. They interact with each other and with other tissues in part by using electrical impulses. The cool thing about these cells is that thanks to their electrical signaling, we can measure when they’re sending their messages. A neuroscientist friend of mine once poetically described as “exquisite” the ability to measure the firing of a single neuron in a finch brain. There is something special about being able to observe that usually hidden process of signaling that underlies every move you make, every thought you have, and every sensation you detect.

The very word “neuroscience” sounds expensive. Measuring the signaling of nerves? That sounds pretty fancy. But with some wires and basic neuroscience tools, anyone can give it a try, measuring the nerve signaling, for example, in an insect. Which, do you think, would be the more memorable learning experience, a full-on sensory exposure to the sights and sounds of neuron signaling, or this? 


Now, a company called Backyard Brains is really bringing the neuroscience to the people. You don’t have to use their affordable kits in your backyard, but as neuroscientist and writer Mo Costandi highlights today in an interview with Tim Marzullo, co-founder of Backyard Brains, this level of technology can become available to high-school students anywhere. In the interview, Marzullo notes that the goal is to produce kits that lower the fiscal and resource requirements for making neuroscience available to people who aren’t graduate students in neuroscience.

As part of their bringing the neuroscience to the people, the Backyard Brains scientists have created the Spiker Box kit, which lets students listen to neurons firing in a de-legged cockroach. These kits are friendly with computers, iPhones, and iPads, so students can use these devices to record and listen to the Zzzzzzztt! of a firing neuron (see video below). Electrophysiology in action, made accessible. 


An even fancier introduction to science awaits. Some proteins are especially made to change their shape in response to a light trigger. Scientists have produced animals–mostly fruit flies–that make these proteins in some neurons, where they don’t usually occur. With light-reactive proteins present in the neurons, researchers can actually make the neurons fire by giving them a shot of laser light. In other words, they can make the animals move using light. Wouldn’t it be cool if classroom students could see that kind of neuroscience in action?

Backyard Brains is on the case. They’re working on a product that will allow students to use blue light emitted from an iPad to trigger light-reactive proteins in nerves that communicate with muscle cells. Because the process involves light and organisms with introduced genes, it’s called optogenetics. That sounds even more swanky than neuroscience, but Backyard Brains is working on making it accessible.

Other Backyard Brains products include RoboRoach (you’ll have to read that one for yourself), soldering kits, and the roaches themselves.

Emily Willingham

The Amazing Antibody and its Therapeutic Potential


NYC campaign to alert the authorities if you see something suspicious. Antibodies are like the citizens that tell our body that something fishy is going down.

By Biology Editor, Jeanne Garbarino

There is a campaign sponsored by NYC’s Metropolitan Transportation Authority (MTA) encouraging citizens to speak up if they see any activity or persons acting in a suspicious manner. Plastered all over buses, subways, and commuter rails are posters with the following message: If you see something, say something. This type of imagery reminds me very much of our own biological warning system programmed to, in essence, “speak up” should a suspicious character of the microscopic kind make its way into our bodies. It is through our immune response that our bodies “say something” in the event of infection.

At the very crux of the immune response are tiny proteins called antibodies, which are basically like the citizens that report any suspicious activities.  Antibodies often travel in the blood stream, and upon crossing paths with a foreign invader (bacteria, virus, etc.), an antibody will flag it down and alert the “local authorities” of the body (aka immune cells). 

For many years, scientists have been studying antibodies and their role in the immune response, revealing many aspects surrounding their structure and function. And through these studies, we have figured out how to use antibodies in ways that go beyond the immune system. For instance, antibodies against human chorionic gonadotropin, or hCG, are the essential ingredients in home pregnancy tests. More recently, scientists have, in many ways, harnessed the power of antibodies for pharmaceutical uses. A very popular example of this is the drug Remicade, which is used to treat severe autoimmune diseases like rheumatoid arthritis and Crohn’s disease. But what exactly are antibodies and how do they work?

Well, I am glad I asked me that question.

As I mentioned, antibodies are proteins that we make. Specifically, they are produced by specialized immune cells called B-cells, which are the main players during our humoral immune response. B-cells will either secrete an antibody, which can then float around the circulatory system, or the antibody can remain attached to the outside of the B-cell. If there is something “foreign” in our bodies, such as a virus or bacterium, an antibody will recognize and attach itself to the invader, which is scientifically referred to as an antigen. When an antibody attaches to an antigen, it signals to our body to get rid of it. Amazingly, each antibody can only recognize one antigen, which is why we need so many different types of antibodies!

To get a better idea of how antibodies work, it is important to learn their basic structure. Antibodies are 'Y'-shaped proteins, and have both constant and variable regions. The constant region is the same among all antibodies within a specific class (there are several different classes), whereas the variable region is the portion of the antibody that is designed to recognize a specific antigen.

To better explain this, consider the antibody to be a lacrosse stick. The "stick" part is the constant region, and the mesh part is the variable region. Now consider the lacrosse ball to be the antigen (i.e., a bacterium or virus). Only the lacrosse ball that is a triangle can fit into the lacrosse stick with the triangle-shaped mesh pocket. The same is true for the circle. And so on. Once the ball fits into the mesh, meaning, once the antibody binds the antigen, a cascade of events is set off, essentially sounding the alarm. Under normal, healthy circumstances, we take care of the antigen and the infectious agent is removed. (Note: there are different classes of antibodies, and each class has its own "stick" part.)

A basic analogy for how antibodies work.
Building off our understanding of how antibodies work, scientists have been able to develop monoclonal antibody therapy, which is the use of specific antibodies to stimulate an immune response against a disease.  For instance, we now use monoclonal antibody therapy to combat a variety of cancers by injecting cancer patients with antibodies designed to recognize specific components on the surface of tumor cells.  This helps signal to the body that it should turn on the immune response and get rid of the tumor cells. 

The list of conditions where monoclonal antibodies are a potential therapy is growing, and includes a variety of autoimmune diseases and cancers, post-organ transplant therapy, human respiratory syncytial virus (RSV) infections in children, and, most recently, hemophilia A. Also being explored is the use of monoclonal antibody therapy for addiction, which could essentially revolutionize how we help people kick extremely difficult habits (e.g., cocaine or methamphetamine).

Despite the thousands of tedious and repetitive assays I’ve done using antibodies in my own laboratory, I know that I can never lose sight of how amazing these little proteins are. 

———————————————-
This post is a mental appetizer for another post on monoclonal antibodies by DXS tech editor Jeffrey Perkel. His post specifically discusses the potential use of monoclonal antibodies to treat the X-linked blood disorder hemophilia A. Read about it here.

Towards better drug development, fewer side effects?

You may have had the experience: A medication you and a friend both take causes terrible side effects in you, but your friend experiences none. (The running joke in our house is, if a drug has a side-effect, we’ve had it.) How does that happen, and why would a drug that’s meant to, say, stabilize insulin levels, produce terrible gastrointestinal side effects, too? A combination of techy-tech scientific approaches might help answer those questions for you — and lead to some solutions.

It’s no secret I love lab technology. I’m a technophile. A geek. I call my web site “Biotechnically Speaking.” So when I saw this paper in the September issue of Nature Biotechnology, well, I just had to write about it.

The paper is entitled, "Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators." If you read that and your eyes glazed over, don't worry – the article is way more interesting than its title.

Those trees on the right are called SPADE trees. They map cellular responses to different stimuli in a collection of human blood cells. Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
Here’s the basic idea: The current methods drug developers use to screen potential drug compounds – typically a blend of high-throughput imaging and biochemical assays – aren’t perfect. If they were, drugs wouldn’t fail late in development. Stanford immunologist Garry Nolan and his team, led by postdoc Bernd Bodenmiller (who now runs his own lab in Zurich), figured part of that problem stems from the fact that most early drug testing is done on immortalized cell lines, rather than “normal” human cells. Furthermore, the tests that are run on those cells aren’t as comprehensive as they could be, meaning potential collateral effects of the compounds might be missed. Nolan wanted to show that flow cytometry, a cell-analysis technique frequently used in immunology labs, can help reduce that failure rate by measuring drug impacts more holistically.


Nolan is a flow cytometry master. As he told me in 2010, he’s been using the technique for more than three decades, and even used a machine now housed in the Smithsonian.


In flow cytometry, researchers treat cells with reagents called antibodies, which are immune system proteins that recognize and bind to specific proteins on cell surfaces. Each type of cell has a unique collection of these proteins, and by studying those collections, it is possible to differentiate and count the different populations.


Suppose researchers wanted to know how many T cells of a specific type were present in a patient’s blood. They might treat those cells with antibodies that recognize a protein known as CD3 to pick those out. By adding additional antibodies, they can then select different T-cell subpopulations, such as CD4-positive helper T cells and CD8-positive cytotoxic T cells, both of which help you mount immune responses.
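If it helps to see that selection logic spelled out, here is a toy sketch in Python. The marker names (CD3, CD4, CD8) come from the example above, but the cells, intensity values, and cutoff are invented for illustration; real cytometry software draws "gates" on actual fluorescence data rather than applying a single fixed threshold.

# Toy illustration of hierarchical gating: first gate on CD3 to pick out
# T cells, then split them by CD4 or CD8. All values and the cutoff are made up.
cells = [
    {"CD3": 950, "CD4": 880, "CD8": 40},   # looks like a helper T cell
    {"CD3": 970, "CD4": 30,  "CD8": 910},  # looks like a cytotoxic T cell
    {"CD3": 20,  "CD4": 15,  "CD8": 10},   # not a T cell at all
]

POSITIVE = 500  # arbitrary cutoff for calling a marker "present"

def classify(cell):
    if cell["CD3"] < POSITIVE:
        return "non-T cell"
    if cell["CD4"] >= POSITIVE:
        return "CD4-positive helper T cell"
    if cell["CD8"] >= POSITIVE:
        return "CD8-positive cytotoxic T cell"
    return "other T cell"

for cell in cells:
    print(classify(cell))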


Cells of the immune system
Source: http://stemcells.nih.gov/info/scireport/chapter6.asp
In a basic flow cytometry experiment, each antibody is labeled with a unique fluorescent dye – the antibody targeting CD3 might be red, say, and the CD4 antibody, green. The cells stream past a laser, one by one. The laser (or lasers – there can be as many as seven) excites the dye molecules decorating the cell surface, causing them to fluoresce. Detectors capture that light and give a count of how many total cells were measured and the types of cells. The result is a kind of catalog of the cell population. For immune cells, for example, that could be the number of T cells, B cells (which, among other things, help you “remember” previous invaders), and macrophages (the big cells that chomp up invaders and infected cells). By comparing the cellular catalogs that result under different conditions, researchers gain insight into development, disease, and the impact of drugs, among other things.


But here’s the problem: Fluorescent dyes aren’t like lasers, which produce light of exactly one particular color. They absorb and emit light over a range of colors, called a spectrum. And those spectra can overlap, such that when a researcher thinks she’s counting CD4 T cells, she may actually be counting some macrophages. That overlap leads to all sorts of experimental optimization issues. An exceptionally talented flow cytometrist can assemble panels of perhaps 12 or so dyes, but it might take months to get everything just right.
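One standard workaround in fluorescence flow cytometry is called compensation: if you measure how much each dye spills into each detector, you can mathematically unmix the signals. Here is a minimal numpy sketch of that idea, with an invented two-dye spillover matrix; the numbers are not from any real instrument, and real compensation involves careful control measurements.

import numpy as np

# Hypothetical spillover matrix: entry [i, j] is the fraction of dye i's
# signal that shows up in detector j. All numbers are invented.
spillover = np.array([
    [1.00, 0.15],   # dye 1 bleeds 15% into the other detector
    [0.08, 1.00],   # dye 2 bleeds 8% the other way
])

true_signal = np.array([500.0, 200.0])               # what the cell really carries
observed = true_signal @ spillover                   # what the detectors report
compensated = observed @ np.linalg.inv(spillover)    # unmix the signals

print(observed)     # [516. 275.]  (second channel inflated by spillover)
print(compensated)  # [500. 200.]  (back to the true values)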


That’s where mass cytometry comes in. Commercialized by DVS Sciences, mass cytometry is essentially the love-child of flow cytometry and mass spectrometry, combining the one-cell-at-a-time analysis of the former with the atomic precision of the latter. Mass spectrometry identifies molecules based on the ratio of their mass to their charge. In DVS’ CyTOF mass cytometer, a flowing stream of cells is analyzed not by shining a laser on them, but by nuking them in superhot plasma. The nuking reduces the cell to its atomic components, which the CyTOF then measures.

Specifically, the CyTOF looks for heavy atoms called lanthanides, elements found in the first of the two bottom rows of the periodic table, like gadolinium, neodymium, and europium. These elements never naturally occur in biological systems and so make useful cellular labels. More to the point, the mass spectrometer is specific enough that these signals basically don’t overlap. The instrument will never confuse gadolinium for neodymium, for instance. Researchers simply tag their antibodies with lanthanides rather than fluorophores, and voila! Instant antibody panel, no (or little) optimization required.

Periodic Table of Cupcakes, with lanthanides in hot pink frosting.
Source: http://www.buzzfeed.com/jpmoore/the-periodic-table-of-cupcakes
Now back to the paper. Nolan (who sits on DVS Sciences’ Scientific Advisory Board) and Bodenmiller wanted to see if mass cytometry could provide the sort of high-density, high-throughput cellular profiling that is required for drug development. The team took blood cells from eight donors, treated them with more than two dozen different drugs over a range of concentrations, added a dozen stimuli to which blood cells can be exposed in the body, and essentially asked, for each of the pathways we want to study, in each kind of cell in these patients’ blood, what did the drug do?


To figure that out, they used a panel of 31 lanthanides – 10 to sort out the cell types they were looking at in each sample, 14 to monitor cellular signaling pathways, and 7 to identify each sample.


I love that last part, about identifying the samples. The numbers in this experiment are kind of staggering: 12 stimuli x 8 doses x 14 cell types x 14 intracellular markers per drug, times 27 drugs, is more than half-a-million pieces of data. To make life easier on themselves, the researchers pooled samples 96 at a time in individual tubes, adding a “barcode” to uniquely identify each one. That barcode (called a “mass-tag cellular barcode,” or MCB) is essentially a 7-bit binary number made of lanthanides rather than ones and zeroes: one sample would have none of the 7 reserved markers (0000000); one sample would have one marker (0000001); another would have another (0000010); and so on. Seven lanthanides produce 128 possible combinations, so it’s no sweat to pool 96. They simply mix those samples in a single tube and let the computer sort everything out later.
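The encoding is simple enough to sketch in a few lines of Python. The seven channel names below are placeholders (the post doesn't say which lanthanides were reserved for the barcode); the point is just the presence/absence encoding and the head count from the paragraph above.

# Sketch of a mass-tag cellular barcode (MCB): each pooled sample gets a
# unique presence/absence pattern across seven reserved lanthanide channels.
# Channel names here are placeholders, not the ones used in the paper.
BARCODE_CHANNELS = ["tag1", "tag2", "tag3", "tag4", "tag5", "tag6", "tag7"]

def barcode(sample_index):
    """Map a sample index to a 7-bit presence/absence pattern."""
    assert 0 <= sample_index < 2 ** len(BARCODE_CHANNELS)   # 128 combinations
    bits = format(sample_index, "07b")
    return {tag: bit == "1" for tag, bit in zip(BARCODE_CHANNELS, bits)}

def decode(pattern):
    """Recover the sample index from the measured tag pattern."""
    bits = "".join("1" if pattern[tag] else "0" for tag in BARCODE_CHANNELS)
    return int(bits, 2)

# 96 pooled samples fit easily into the 2**7 = 128 possible patterns.
assert all(decode(barcode(i)) == i for i in range(96))

# And the scale of the experiment, as counted in the text:
print(12 * 8 * 14 * 14 * 27)   # 508032 data points, "more than half-a-million"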


This graphic summarizes a boatload of data on cell signaling pathways impacted by different drugs.
Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
When all was said and done, the team was able to draw some conclusions about drug specificity, person-to-person variation, cell signaling, and more. Basically, and not surprisingly, some of the drugs they looked at are less specific than originally thought – that is, they affect their intended targets, but other pathways as well. That goes a long way towards explaining side effects. But more to the point, they showed that their approach can be used to drive drug-screening experiments.


And I get to write about it.