Today’s guest post (originally posted here) is from Katie Hinde, an Assistant Professor in Human Evolutionary Biology at Harvard University. Katie studies how variation in mother’s milk influences infant development in rhesus monkeys. You can learn more about Katie and mammalian lactation by visiting her blog, Mammals Suck… Milk!. Follow Katie on Twitter @Mammals_Suck.
Milk is everywhere. From the dairy aisle at the grocery store to the explosive cover of the Mother’s Day issue of Time magazine, the ubiquity of milk makes it easy to take for granted. But surprisingly, milk synthesis is evolutionarily older than mammals. Milk is even older than dinosaurs. Moreover, milk contains constituents that infants don’t digest, namely oligosaccharides, which are the preferred diet of the neonate’s intestinal bacteria (nom nom nom!). And milk doesn’t just feed the infant and the infant’s microbiome; the symbiotic bacteria are IN mother’s milk.
Evolutionary Origins of Lactation
The fossil record, unfortunately, leaves little direct evidence of the soft-tissue structures that first secreted milk. Despite this, paleontologists can scrutinize morphological features of fossils, such as the presence or absence of milk teeth (diphyodonty), to infer clues about the emergence of “milk.” Genome-wide surveys of the expression and function of mammary genes across divergent taxa, and experimental evo-devo manipulations of particular genes also yield critical insights. As scientists begin to integrate information from complementary approaches, a clearer understanding of the evolution of lactation emerges.
In his recent paper, leading lactation theorist Dr. Olav Oftedal discusses the ancient origins of milk secretion (2012). He contends the first milk secretions originated ~310 million years ago (MYA) in synapsids, a lineage ancestral to mammals and contemporary with sauropsids, the ancestors of reptiles, birds, and dinosaurs. Synapsids and sauropsids produced eggs with multiple membrane layers, known as amniote eggs. Such eggs could be laid on land. However, synapsid eggs had permeable, parchment-like shells and were vulnerable to water loss. Burying these eggs in damp soil or sand near water, as sea turtles do, wasn’t an option, posits Oftedal: temperatures underground would likely have been too cold for the higher metabolism of synapsids. But incubating eggs in a nest would have evaporated water from the egg. The synapsid egg was proverbially between a rock and a hard place: too cold underground to bury, too permeable to incubate in a nest.
Ophiacodon by Dmitri Bogdanov
Luckily for us, a mutation gave rise to secretions from glandular skin on the belly of the synapsid parent. This mechanism replenished water lost during incubation, allowing synapsids to lay eggs in a variety of terrestrial environments. As other mutations randomly arose and were favored by selection, milk composition became increasingly complex, incorporating nutritive, protective, and hormonal factors (Oftedal 2012). Some of these milk constituents are shunted into milk from maternal blood; some, although also present in the maternal bloodstream, are regulated locally in the mammary gland; and some very special constituents are unique to milk. Lactose and oligosaccharides (sugars with lactose at the reducing end) are two constituents unique to mammalian milk, but they are interestingly divergent among mammals living today.
Illustration by Carl Buell
Mammalian and Primate Divergences: Milk Composition
Among all mammals studied to date, lactose and oligosaccharides are the primary sugars in milk. Lactose is synthesized in mammary glands only. Urashima and colleagues explain that lactose synthesis is contingent on the mammalian-specific protein alpha-lactalbumin (2012). Alpha-lactalbumin is very similar in amino-acid structure to C-type lysozyme, a more ancient protein found throughout vertebrates and insects. C-type lysozyme acts as an anti-bacterial agent. Oligosaccharides are predominant in the milks of marsupials and egg-laying monotremes (i.e. the platypus), but lactose is the most prevalent sugar in the milk of most placental (aka eutherian) mammals. Interestingly, the oligosaccharides in the milk of placental mammals are most similar to the oligosaccharides in the milk of monotremes. Unique oligosaccharides in marsupial milk emerged after the divergence of placental mammals.
Marsupial and monotreme young seemingly digest oligosaccharides. Among placental mammals, however, young do not have the requisite enzymes in their stomach and small intestine to utilize oligosaccharides themselves. Why do eutherian mothers synthesize oligosaccharides in milk, if infants don’t digest them?
In May, Anna Petherick’s post “Multi-tasking Milk Oligosaccharides” revealed that oligosaccharides serve a number of critical roles in supporting the healthy colonization and maintenance of the infant’s intestinal microbiome. Beneficial bacterial symbionts contribute to the digestion of nutrients from our food. Just as importantly, they are an essential component of the immune system, defending their host against many ingested pathogens. The structures of milk oligosaccharides have been described for a number of primates, including humans, and data are now available from all major primate clades: strepsirrhines (e.g. lemurs), New World monkeys (e.g. capuchins), Old World monkeys (e.g. rhesus macaques), and apes (e.g. chimpanzees).
Among all non-human primates studied to date, Type II oligosaccharides (those containing N-acetyllactosamine) are most prevalent. Type I oligosaccharides (those containing lacto-N-biose I) are absent, or occur in much lower concentrations than Type II (Taufik et al. 2012).
In human milk, there is a much greater diversity and higher abundance of milk oligosaccharides than found in the milk of other primates. Most primate taxa have between 5 and 30 milk oligosaccharides; humans have ~200. Even more astonishingly, humans predominantly produce Type I oligosaccharides, the preferred food of the most prevalent bacteria in the healthy human infant gut: Bifidobacteria (Urashima et al. 2012, Taufik et al. 2012).
Human infants have bigger brains and an earlier age at weaning than do our closest ape relatives. Many anthropologists have hypothesized that constituents in mother’s milk, such as higher fat concentrations or unique fatty acids, underlie these differences in human development. But only oligosaccharides, a constituent that the human infant does not itself utilize, are demonstrably derived relative to our primate relatives (Hinde and Milligan 2011). At some point in human evolution there must have been strong selective pressure to optimize the symbiotic relationship between the infant microbiome and the milk mothers synthesize to support it. The human and Bifidobacteria genomes show signatures of co-evolution, but the selective pressures and their timing remain to be understood.
Vertical Transmission of Bacteria via Milk
In the womb, the infant is largely protected from maternal bacteria by the placental barrier. But upon birth, the infant is confronted by a teeming microbial milieu that is both a challenge and an opportunity. The first inoculation of commensal bacteria occurs during delivery, as the infant passes through the birth canal and is exposed to a broad array of maternal microbes. Infants born via C-section are instead, and unfortunately, colonized by the microbes “running around” the hospital. But exposure to the mother’s microbiome continues long after birth. Evidence for vertical transmission of maternal bacteria via milk has been shown in rodents, monkeys (Jin et al. 2011), humans (Martin et al. 2012), and… insects.
A number of insects have evolved the ability to rely on nutritionally incomplete food sources. They are able to do so because bacteria that live inside their cells provide what the food does not. These bacteria are known as endosymbionts and the specialized cells the host provides for them to live in are called bacteriocytes. For example, the tsetse fly has a bacterium, Wigglesworthia glossinidia,* that provides B vitamins not available from blood meals. Um, if you are squeamish, don’t read the previous sentence.
*I submit the tsetse fly and its bacterial symbiont (Wigglesworthia glossinidia) for consideration as the number one mutualism in which the common name of the host and the Latin name of the bacteria are awesome to say out loud! Bring on your challenger teams.
Hosokawa and colleagues recently revealed the Russian nesting dolls that are bats (Miniopterus fuliginosus), bat flies (Nycteribiidae), and endosymbiotic bacteria (proposed name Aschnera chenzii) (2012). Bat flies are obligate ectoparasites of bats (Peterson et al. 2007). They feed on the blood of their bat hosts, and for nearly their entire lifespan, bat flies live in the fur of their bat hosts. Females briefly leave their host to deposit pupae on stationary surfaces within the bat roost.
Bat flies are even more crazy amazing because they have a uterus and provide MILK internally through the uterus to their larvae! Male and female bat flies have endosymbiotic bacteria living in bacteriocytes along the sides of their abdominal segments (revealed by 16S rRNA sequencing). Additionally, females host bacteria inside the milk gland tubules, “indicating the presence of endosymbiont cells in milk gland secretion”.
The authors are not yet certain of the specific nutritional role that these bacterial endosymbionts play in the bat fly host. The bacteria may provide B vitamins, as other bacterial symbionts of blood-consuming insects are known to do. My main question: what is the exact role of the bacteria in the milk gland tubules? Are they there to add nutritional value to the milk for the larvae, to stow away in milk for vertical transmission to the larvae, or both?
The studies described above represent new frontiers in lactation research. The capacity to secrete “milk” has been evolving since before the age of dinosaurs, but we still know relatively little about the diversity of milks produced by mammals today. Even less understood are the consequences and functions of various milk constituents in the developing neonate. Despite the many unknowns, it is increasingly evident that mother’s milk cultivates the infant’s gut bacterial communities in fascinating ways, a microbiome milk-ultivation, if you will, with far-reaching implications for human development, nutrition, and health. Integrating an evolutionary perspective into these newly discovered complexities of milk dynamics allows us to reimagine the world of “dairy” science.
Hosokawa et al. 2012. Reductive genome evolution, host-symbiont co-speciation, and uterine transmission of endosymbiotic bacteria in bat flies. ISME Journal. 6: 577-587.
Jin et al. 2011. Species diversity and abundance of lactic acid bacteria in the milk of rhesus monkeys (Macaca mulatta). J Med Primatol. 40: 52-58.
Martin et al. 2012. Sharing of Bacterial Strains Between Breast Milk and Infant Feces. J Hum Lact. 28: 36-44.
Oftedal 2012. The evolution of milk secretion and its ancient origins. Animal. 6: 355-368.
Peterson et al. 2007. The phylogeny and evolution of host choice in the Hippoboscoidea (Diptera) as reconstructed using four molecular markers. Mol Phylogenet Evol. 45: 111-122.
Taufik et al. 2012. Structural characterization of neutral and acidic oligosaccharides in the milks of strepsirrhine primates: greater galago, aye-aye, Coquerel’s sifaka, and mongoose lemur. Glycoconj J. 29: 119-134.
Urashima, Fukuda, & Messer. 2012. Evolution of milk oligosaccharides and lactose: a hypothesis. Animal. 6: 369-374.
The Sun will rise on the morning of December 22 and find most of humanity still living. I can say that with a great deal of confidence, though my scientist’s brain tells me I should say the world “probably” won’t end tomorrow. After all, there’s a tiny chance, a minuscule probability…but it’s so small we don’t have to worry about it, just like we don’t have to worry about being struck down by a meteorite while walking down the street. It could happen, but it almost certainly won’t.
My confidence comes from science. I know it sounds hokey, but it’s true. There’s no scientific reason—absolutely none—to think the world will end tomorrow. Yes, the world will end one day, and Earth has experienced some serious cataclysms in the past that wiped out a significant amount of life, but none of those things are going to happen tomorrow. (I’ll come back to those points in a bit.) We’re very good at science, after centuries of work, and the kinds of violent events that could seriously threaten us won’t take us by surprise.
Why the World Won’t End
So where does this stuff come from? Whose idea was it that “the end of the world will be on December 21, 2012”? The culprit, according to those who buy into the idea, is Mayan mythology: the end of the world was supposedly predicted by the Mayas and codified in their calendar. While I don’t know much about the great Mayan civilization that existed on the Yucatan peninsula in what is now Mexico from antiquity until the Spanish conquest, it’s pretty safe to say that the Mayas didn’t really predict the end of the world.
See this calendar? It’s being touted as a Mayan calendar in articles about the “end of the world”, but it ain’t Mayan. It’s an Aztec calendar. Please don’t mix up civilizations.
The Mayas were the only people in the Americas known to have developed a complete written language, which is part of how we know a lot about them despite their destruction at the hands of European invaders. In particular, we know about their calendar, and the divisions they used. We use what’s called a decimal system for numbers, based on the 10 fingers of our hands. That’s why we break things up into decades (ten years), centuries (ten decades), and millennia (ten centuries). The Mayas liked different divisions of time: their b’ak’tun is approximately 394 years, and they placed a certain significance on a cycle of 13 b’ak’tuns. (I suspect the Klingon language in Star Trek borrowed some of its vocabulary from ancient Mayan.)
In the “Long Count,” one version of the Mayan calendar known to us, the present world came to be on August 11, 3114 BC. That world will end at the close of the 13th b’ak’tun from that creation day, which happens to be December 21, 2012. However, there’s good reason to think that the Mayas didn’t believe this would be the end of all things: other calendars exist that refer to an even longer span of years, stretching thousands of years into the future!
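The arithmetic behind these dates is easy to verify. Here is a minimal sketch of my own (not from the original article), assuming the standard GMT-584283 correlation, which anchors the Long Count’s creation date of 0.0.0.0.0 to Julian Day Number 584,283:

```python
# Back-of-the-envelope check: convert the end of the 13th b'ak'tun to a
# Gregorian date. Assumes the widely used GMT-584283 correlation; the
# constant and function names are my own illustrative labels.

BAKTUN = 144_000        # days per b'ak'tun (20 k'atuns x 7,200 days)
CREATION_JDN = 584_283  # JDN of 0.0.0.0.0: August 11, 3114 BC (proleptic Gregorian)

def jdn_to_gregorian(jdn):
    """Fliegel & Van Flandern integer algorithm: JDN -> (year, month, day)."""
    l = jdn + 68569
    n = 4 * l // 146097
    l = l - (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l = l - 1461 * i // 4 + 31
    j = 80 * l // 2447
    day = l - 2447 * j // 80
    l = j // 11
    month = j + 2 - 12 * l
    year = 100 * (n - 49) + i + l
    return year, month, day

end_of_cycle = CREATION_JDN + 13 * BAKTUN   # 584,283 + 1,872,000 days
print(jdn_to_gregorian(end_of_cycle))       # (2012, 12, 21)
print(round(BAKTUN / 365.2425, 1))          # ~394.3 years per b'ak'tun
```

Thirteen b’ak’tuns is 1,872,000 days, roughly 5,125 years, so counting forward from August 11, 3114 BC lands squarely on December 21, 2012.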
Even more importantly, though: the Mayan cosmology (their view of the universe) was cyclic, as in many other religions. This world was not the first in this cosmology, and it won’t be the last. In such a view, the true universe is eternal, and the cycles of time are a kind of divine rebooting, which don’t really end anything. The end of the 13th b’ak’tun might be a transformative event in the Maya cosmology, but it’s not the end of the world.
Frankly, I’m not sure why we should care even if the Mayas did believe this was the end of the world. As I said previously, there’s no scientific reason to think the world will end tomorrow. But maybe you might think there’s a non-scientific reason—divine intervention to wipe out the Earth, perhaps. However, I’d venture to guess that most of us don’t adhere to the Mayan religion. Their gods are not the gods most people worship. The prophesied arrival on Earth of Bolon Yookte’ K’Uh, the Nine-Footed God is not something central to my belief system, and probably not yours either.
In fact, millennial thinking is far more a Christian thing than it is a Mayan thing—or frankly most other religions. When people talk about the supposed end of the world tomorrow, they use the Christian terminology: Armageddon (referring to Megiddo, a place in northern Israel, named in the Book of Revelation as the site of the last battle) or the apocalypse (literally the “uncovering”, when all that was hidden becomes revealed). These weren’t concepts in the Mayan religion, and nothing in the Christian religion says the world will end on December 21, 2012.
The World Will End…Eventually
Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
Science tells us the world won’t end tomorrow. It also tells us the Mayan cosmology is wrong: time doesn’t go in cycles forever. Earth began 4.5 billion years ago, and will end in about 5 billion years more—at least as a livable world, which is what counts for us. In between its beginning and end, it is defined by cycles: the length of rotation (days) and the time to travel around the Sun (years), with its associated seasons. Other cycles are pretty arbitrary: centuries and b’ak’tuns don’t have any particular significance in terms of astronomical events.
The end of the world as we know it will happen in about 5 billion years, when the Sun ceases fusing hydrogen into helium in its core. When that happens, the Sun will grow into a red giant star, swallowing up Mercury and Venus. Earth probably won’t be devoured, but with the Sun’s surface so much closer, things will become distinctly unpleasant. It’s unlikely the atmosphere or oceans could survive, meaning the end of most life. (Some microbes could probably continue to live underground. That kind of thing is a story for another day.) However, 5 billion years is a long time from now.
Could another cataclysm overtake us before that time? Yes. As you may know, about 65 million years ago a large asteroid smashed into Earth, an event that at least helped end the reign of the dinosaurs and ushered in the extinction of many other species.
Unfortunately, we can’t rule out the possibility that this could happen again. There are plenty of asteroids and comets in our Solar System whose orbits could eventually cross Earth’s; if a large specimen collided with us, it would be devastating.
However, we’re talking about tomorrow. No asteroid will strike Earth on December 21: astronomers keep careful track of everything near our planet, and nothing we know of is on a collision course with Earth for the near future. Asteroids and comets are really the only things we have to worry about doing serious damage to life on Earth, but you can sleep easy tonight and tomorrow night: we’re safe.
If you could somehow see the planets during daylight hours, here’s how they would appear tomorrow at noon. There’s no alignment. (You can see this for yourself using the free planetarium program Stellarium.)
Some people have talked about fairly far-fetched ideas: alignments of planets, or a lining up of Earth, the Sun, and the center of the galaxy. The planets of the Solar System aren’t aligned tomorrow—the image shows where several of them are in relation to the Sun at noon. Jupiter isn’t anywhere close to the planets you see. You’d need a pretty strong imagination to say they’re lined up in any way: while they do lie along a line, that’s the way they always are, since they all orbit the Sun more or less in the same plane. Alignment with the galactic center is even simpler to dismiss: about once a year, the Sun appears aligned with the galactic center in the sky. And nothing happens.
Another explanation I’ve seen involves a mysterious planet called “Nibiru” or “Planet X,” which either will collide with Earth or otherwise generate a baleful influence. Phil Plait, the Bad Astronomer, has a lot about the Nibiru nonsense, so I won’t repeat what he says. Suffice to say Nibiru doesn’t exist: there’s no evidence for it, and (surprise!) it’s not anything that came from Mayan mythology to begin with, so there’s no reason to associate it with a December 21 apocalypse.
A Positive Conclusion
Science, I think, is reassuring in the midst of panic. Why people like to scare themselves and others with misguided ideas of the world’s end, I am not qualified to say. I don’t know how many people are convinced the world will end tomorrow, compared with the number of people who are either wholly skeptical or those who might be a little worried. However, let me reassure you again: the world will not end tomorrow. We can take comfort in the knowledge that December 22 will come, 2012 will end, and a new year—a new cycle—will begin. Any remaking of the world is up to us, so rather than worrying about imaginary apocalypses, let’s commit to improving the lives of those who live on our magnificent planet.
We started with a welcome, gratitude to the organizers and attendees, and our tagline “Science, I am Just That Into You.” We were selected to appear alongside a lot of fantastic programming over the weekend.
We introduced our 3 panelists:
Adrienne Roehrich, your panel moderator and the chemistry editor at Double X Science
All 3 have PhDs in their respective fields – Emily is a developmental biologist, Ray is an analytical chemist, and Adrienne is a physical chemist. Emily and Ray are prolific writers; you can find their articles all over the internet and in print. Ray is a staff member for GeekGirlCon and Adrienne is a Special Agent volunteer. All 3 are active on social media and welcome live-tweeting; we suggested the #DXS hashtag along with #GGC12, and the @DoubleXSci handle for the panel.
Then we polled the room to see who had heard of the site. Only a few attendees were already familiar with it, so we told them that Double X Science covers a lot of current science. For example, on (the previous) Monday, Emily posted about the Mars Curiosity Rover touchdown. In July, the physics editor covered the Higgs particle announcement. We also cover timeless, yet updated, science, such as pregnancy and other health issues that we editors perceive to be of interest to ourselves and our readers.
It’s hard to discuss what Double X Science is without discussing who it is.
After a review of who all the people on that particular slide are and what they have to do with Double X Science, three questions were asked by the moderator:
In November of 2011, Emily founded Double X Science, Emily what was your motivation in founding the site and what was then and is now your vision for it?
As mentioned, we have content from editors, other sites and contributors. Ray was the first contributor to the site – what attracted you to Double X Science?
What do the attendees want to know?
And then our discussion really got started. Thankfully, we had 3 great tweeters attending, so I can just point you to their tweets:
[View the story “Double X Science panel at GeekGirlCon 2012” on Storify: http://storify.com/fiainros/double-x-science-panel-at-geekgirlcon-2012]
Photo by Adrienne Roehrich and used with permission.
Tomorrow, I head for North Carolina to attend Science Online 2012. I attended last year as an information sponge and observer who knew no one and experienced some highlights and lowlights. This year, I’m attending as a participant and as a moderator of two sessions. The first session, on Thursday afternoon, is with Deborah Blum, and we’ll be leading a discussion about how and when to include basic science in health and medical writing without distracting the reader. The second session I’m moderating is with Maia Szalavitz, and we’ll be talking about whether or not it’s possible to write in health and medicine as an advocate and still be even-handed. Session descriptions are below, as are the topics that we’ll be tossing around for discussion.
Thursday, 2:45 p.m.: The basic science behind the medical research: Where to find it, how and when to use it.
Sometimes, a medical story makes no sense without the context of the basic science–the molecules, cells, and processes that led to the medical results. At other times, inclusion of the basic science can simply enhance the story. How can science writers, especially without specific training in science, find, understand, and explain that context? As important, when should they use it? The answers to the second question can depend on publishing context, intent, and word count. This session will involve moderators with experience incorporating basic science information into medically based pieces with their insights into the whens and whys of using it. The session will also include specific examples of what the moderators and audience have found works and doesn’t work from their own writing.
Deborah and I have been talking about some issues we’d like to raise for discussion. The possibilities are expansive. Some highlights:
Scientific explanation (and understanding) is the foundation for the best science writing. In fact, if the writer doesn’t understand the science, he or she may miss the most important part of the story. But we worry that pausing to explain can slow a story down or disrupt the flow. In print, writers deal with this by condensing and simplifying explanations and also by trying to make them lively and vivid, such as by use of analogy. But online, we use hotlinks as often, if not more often, for the same purpose.
Reaching a balance between links and prose can be a difficult task. Another possible pitfall is writing an explanation that’s more about teaching ourselves than it is about informing a reader sufficiently for story comprehension. How many writers run into that problem?
Online, the temptation is to give the barest explanation and then link to a fuller account, but that approach has pros and cons. More information is available to the reader, and the sourcing is transparent. But how often do readers follow those links – and how often do they return? Links are not necessarily evergreen, can lose the reader (they can be exit portals), or may go unused entirely, costing the reader some of the story’s relevant information.
A reader may actually learn more from a print story, where there are no built-in escape clauses. So how does the online science writer best construct a story that illuminates the subject? Are readers learning as much from our work as they do from a print version? (And there’s that age-old question: are we here to teach or to inform?)
Are we diminishing our own craft if we use links to let others tell the story for us? If we simply link out rather than working to supply an accessible explanation, negatives could include not pushing ourselves as writers and not expanding our own knowledge base, both essential to our craft.
How much do we actually owe our readers here? How much work should we expect them to do?
What are some ways to address issues of flow, balance, clarity? One possibility is, of course, expert quotes. Twitter is buzzing with scientists, many of whom likely would be pleased to explain a concept or brainstorm about it. (I’ve helped people who have “crowdsourced” in this way for a story, just providing an understandable, basic explanation for something complex).
Deborah and I are considering a challenge for the audience with a couple of basic science descriptives, to define them for a non-expert audience without using typical hackneyed phrases. Ideas for this challenge are welcome.
We also will feature some examples from our own work in which we think we bollixed up something in trying to explain it (overexplained or did it more for our own understanding than the reader’s) and examples from our own or others’ work of good accessible writing explaining a basic concept. We particularly want to show some explanations of quite complicated concepts–some that worked, some that didn’t. Suggestions for these are welcome!
Finally, when we do use links in our online writing, what constitutes a quality link?
Saturday, 10:45 a.m.: Advocacy in medical blogging/communication. Can you be an advocate and still be fair?
There is already a session on how reporting facts on controversial topics can lead to accusations of advocacy. But what if you *are* an avowed advocate in a medical context, either as a person with a specific condition (autism, multiple sclerosis, cancer, heart disease) or an ally? How can you, as a self-advocate or ally of an advocate, still retain credibility–and for what audience?
The genesis of this session was my experience in the autism community. I’m an advocate of neurodiversity, the basic premise of which is that people of all neurologies have potential that should be sought, emphasized, and nurtured over their disabilities. Maia, the co-moderator of our session, has her own story of advocacy to tell as a writer about pain, pain medication, mental health, and addiction.
Either of these topics is controversial, and when you’ve put yourself forward as an advocate, how can you also present as a trustworthy voice on the subject? Maia and I will lead a discussion that will hit, among other things, on the following topics that we hope will lead to a vigorous exchange and input from people whose advocacy is in other arenas:
Can stating facts or scientific findings themselves lead to a perception of advocacy? Maia’s experience is, for example, about observing that heroin doesn’t addict everyone who tries it. My example is about noting the facts from research studies that have identified no autism-vaccine link.
Any time either of us talks about vaccines or medications for mental health, we’ve run into accusations of being a “Big Pharma tool,” or worse. What response do such accusations require, and what constitutes a conflict of interest here? What is the level of corruption of data that’s linked to pharma involvement? If they are the only possible source of funding for particular studies… do we ignore their data completely?
We both agree that having an advocacy bias seems to strengthen our skeptical thinking skills, that it leads us to dig into data with an attitude of looking for facts and going beyond the conventional wisdom in a way that someone less invested might not do. Would audience members agree?
In keeping with that, are advocates in fact in some ways more willing to acknowledge complexities and grey areas rather than reducing every situation to black and white?
We also want to talk about how the passion of advocacy can lead to a level of expertise that may not be as easily obtained without some bias.
That said, another issue that then arises is, How do you grapple with confirmation bias? We argue that you have to consciously be ready to shift angle and conclusions when new information drives you that way–just as a scientist should.
One issue that has come to the forefront lately is the idea of false equivalence in reporting. Does being an advocate lead to less introduction of false equivalence?
We argue that you may not be objective but that you can still be fair–and welcome discussion about that assertion.
And as Deborah and I are doing, we’re planning a couple of challenge questions for discussants to get things moving and to produce some examples of our own when we let our bias interfere too much and when we felt that we remained fair.
————————————————— The entire conference agenda looks so delicious, so full of moderators and session leaders whom I admire, people I know will have insights and new viewpoints for me. The sheer expanse of choice has left me as-yet unable to select for myself which sessions I will attend. If you’re in the planning stages and see something you like for either of these sessions, please join us and…bring your discussion ideas!
The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.
The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Perhaps even more surprising is that two of these categories of large molecules are themselves assembled from a remarkably small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.
Big Molecules with Small Building Blocks
The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.
We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.
When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.
Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.
The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.
Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.
On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.
The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!
If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought “deoxyribose,” you’d be right.
The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?
If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The deoxyribose is called that because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.
In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.
Sugar and Fuel
A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.
Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
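The counting argument above can be made concrete with a short sketch (the peptide and DNA lengths below are arbitrary examples, not figures from the text):

```python
# Count distinct linear polymers of a given length from a fixed monomer alphabet.
# With k kinds of monomers and a chain of length n, there are k**n possible
# ordered sequences -- the same combinatorics as n-digit numbers built from 0-9.

def polymer_count(kinds: int, length: int) -> int:
    """Number of distinct ordered chains built from `kinds` monomers."""
    return kinds ** length

# 20 amino acids in a short 10-residue peptide:
print(polymer_count(20, 10))   # 10,240,000,000,000 possible sequences
# 4 DNA nucleotides in a 10-base stretch:
print(polymer_count(4, 10))    # 1,048,576 possible sequences
```

Even these modest chain lengths yield trillions of possibilities, which is why so few building blocks can account for so much biological diversity.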
Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.
Polysaccharides: Fuel and Form
Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.
Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.
Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and access it when our glucose levels drop.
Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.
The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.
Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.
The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.
That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.
These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.
Lipids: The Fatty Trifecta
Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.
Fats: the Good, the Bad, the Neutral
Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?
Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.
Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.
Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: the fat is saturated to its fullest, with every covalent bond occupied by a hydrogen single-bonded to a carbon.
Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–avocados and coconuts also are known for their saturated fat content.
The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.
You can probably now guess what an unsaturated fat is–one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with carbon, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.
In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.
A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.
Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.
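The relationship between saturation and hydrogen count can be put into a small formula sketch. For a straight-chain fatty acid with n carbons, the saturated form is CnH2nO2, and each carbon-carbon double bond removes two hydrogens (this assumes a simple unbranched chain, as in the examples above):

```python
# Molecular formula of a straight-chain fatty acid: a saturated chain with
# n carbons is C_n H_{2n} O_2; each C=C double bond removes two hydrogens.

def fatty_acid_formula(carbons: int, double_bonds: int = 0) -> str:
    hydrogens = 2 * carbons - 2 * double_bonds
    return f"C{carbons}H{hydrogens}O2"

print(fatty_acid_formula(18, 0))  # stearic acid (saturated) -> C18H36O2
print(fatty_acid_formula(18, 1))  # oleic acid              -> C18H34O2
print(fatty_acid_formula(18, 2))  # linoleic acid           -> C18H32O2
```

Each double bond costs two hydrogens and adds one kink, which is why the 18-carbon fatty acids in canola oil pack loosely and stay liquid at room temperature.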
Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.
Phospholipids: An Abundant Fat
You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.
Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.
There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.
Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.
The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.
Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at temperatures in which cells thrive.
Steroids: Here to Pump You Up?
Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some “bad.” The good cholesterol, high-density lipoprotein, or HDL, in part helps us out because it removes the bad cholesterol, low-density lipoprotein or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.
But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.
Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.
Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.
As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.
Levels of Structure
Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked together form a polypeptide. When a long chain of amino acids interacts with itself to form a properly folded molecule, we call that molecule a protein.
For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.
This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.
Once those interactions have occurred, the protein can fold into its final, or tertiary structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the amino acid chain’s secondary interactions must usually be ongoing, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.
The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.
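The “like interacts with like” idea can be sketched as a toy classifier. The hydrophobic set below is a deliberately simplified assumption for illustration, not a full hydropathy scale, and the sequence is made up:

```python
# Toy sketch: label each residue of an amino acid sequence (one-letter codes)
# as hydrophobic or hydrophilic. The hydrophobic set here is a simplified
# assumption, not an authoritative hydropathy scale.
HYDROPHOBIC = set("AVLIMFWP")  # residues treated as hydrophobic in this sketch

def classify(sequence: str) -> list[tuple[str, str]]:
    return [(aa, "hydrophobic" if aa in HYDROPHOBIC else "hydrophilic")
            for aa in sequence]

for residue, kind in classify("MKVLF"):
    print(residue, kind)
```

In a real protein, residues labeled hydrophobic tend to cluster in the folded core away from water, while the hydrophilic ones face the watery surroundings, which is the driving logic behind the secondary and tertiary structures described above.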
In many instances, however, a complete protein consists of two or more interacting amino acid chains. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.
A Plethora of Purposes
What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.
As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.
How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault in our cells called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes, the protein that the cell builds using that code will be changed, too.
Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.
DNA vs. RNA: A Matter of Structure
DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.
So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.
RNA is also basic at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.
DNA vs. RNA: Function Wars
An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
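The pairing rules above are mechanical enough to sketch in a few lines. Given one DNA template strand, we can build either its DNA complement or its RNA copy (the six-base template below is an arbitrary example):

```python
# Base-pairing sketch: build the complementary DNA strand and the RNA copy
# from a DNA template strand, using the pairing rules described above
# (A-T and C-G in DNA; A-U and C-G when copying into RNA).
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "T": "A", "C": "G", "G": "C"}

def copy_dna(template: str) -> str:
    """Pair each template base with its DNA partner."""
    return "".join(DNA_PAIR[base] for base in template)

def copy_rna(template: str) -> str:
    """Pair each template base with its RNA partner (U replaces T)."""
    return "".join(RNA_PAIR[base] for base in template)

template = "ATGCCT"
print(copy_dna(template))  # TACGGA
print(copy_rna(template))  # UACGGA
```

Note that the two outputs differ only where the template has an A: DNA pairs it with T, while RNA pairs it with U, exactly the thymine-for-uracil swap described above.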
RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.
How often have you wished for an extra hour or extra day to get everything you need done? At the Autism Science Foundation (ASF), we want to make the most of this special leap day by using it to help autism science leap forward.
Thanks to your support, for the last two years we have provided funding for autism stakeholders (parents, individuals with autism, teachers, students, etc.) to attend the International Meeting for Autism Research (IMFAR). All donations made today, February 29, 2012, will go directly to our IMFAR Travel Grants program, helping us provide more scholarships to IMFAR 2012 in Toronto, where recipients will share their real-world autism experience with scientists. These stakeholders will then bring the latest autism science back to our communities, helping the science take a giant leap forward.
After attending IMFAR, past grant recipients have:
– Organized a five day autism science seminar at Barnard College
– Presented critical autism research information to nurses in Philadelphia
– Produced multiple blog posts that reached thousands of readers around the world
– Organized an autism awareness club and speaker series at Yale College
And thanks to a generous donor, all donations made today (February 29, 2012) will be matched dollar for dollar for an extra big leap.
The Autism Science Foundation was founded in 2009 as a nonprofit corporation organized for charitable and educational purposes, and exempt from taxation under section 501(c)(3) of the IRS code.
The Autism Science Foundation’s mission is to support autism research by providing funding and other assistance to scientists and organizations conducting, facilitating, publicizing and disseminating autism research. The organization also provides information about autism to the general public and serves to increase awareness of autism spectrum disorders and the needs of individuals and families affected by autism.
You may have had the experience: A medication you and a friend both take causes terrible side effects in you, but your friend experiences none. (The running joke in our house is, if a drug has a side-effect, we’ve had it.) How does that happen, and why would a drug that’s meant to, say, stabilize insulin levels, produce terrible gastrointestinal side effects, too? A combination of techy-tech scientific approaches might help answer those questions for you — and lead to some solutions.
It’s no secret I love lab technology. I’m a technophile. A geek. I call my web site “Biotechnically Speaking.” So when I saw this paper in the September issue of Nature Biotechnology, well, I just had to write about it.
The paper is entitled, “Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators.” If you read that and your eyes glazed over, don’t worry – the article is way more interesting than its title.
Those trees on the right are called SPADE trees. They map cellular responses to different stimuli in a collection of human blood cells. Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
Here’s the basic idea: The current methods drug developers use to screen potential drug compounds – typically a blend of high-throughput imaging and biochemical assays – aren’t perfect. If they were, drugs wouldn’t fail late in development. Stanford immunologist Garry Nolan and his team, led by postdoc Bernd Bodenmiller (who now runs his own lab in Zurich), figured part of that problem stems from the fact that most early drug testing is done on immortalized cell lines, rather than “normal” human cells. Furthermore, the tests that are run on those cells aren’t as comprehensive as they could be, meaning potential collateral effects of the compounds might be missed. Nolan wanted to show that flow cytometry, a cell-analysis technique frequently used in immunology labs, can help reduce that failure rate by measuring drug impacts more holistically.
Nolan is a flow cytometry master. As he told me in 2010, he’s been using the technique for more than three decades, and even used a machine now housed in the Smithsonian.
In flow cytometry, researchers treat cells with reagents called antibodies, which are immune system proteins that recognize and bind to specific proteins on cell surfaces. Each type of cell has a unique collection of these proteins, and by studying those collections, it is possible to differentiate and count the different populations.
Suppose researchers wanted to know how many T cells of a specific type were present in a patient’s blood. They might treat those cells with antibodies that recognize a protein known as CD3 to pick those out. By adding additional antibodies, they can then select different T-cell subpopulations, such as CD4-positive helper T cells and CD8-positive cytotoxic T cells, both of which help you mount immune responses.
Cells of the immune system Source: http://stemcells.nih.gov/info/scireport/chapter6.asp
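That gating logic can be sketched in a few lines of code. This is my own toy illustration, not anything from the paper: the cells and the marker sets are hypothetical, and real gating works on continuous fluorescence intensities with thresholds, not clean present/absent labels.

```python
# Toy sketch of flow-cytometry gating: classify each cell by which
# surface markers it carries. Marker sets here are hypothetical.
def classify_t_cell(markers):
    """Return a label for one cell given its detected surface markers."""
    if "CD3" not in markers:
        return "not a T cell"      # CD3 gates T cells as a whole
    if "CD4" in markers:
        return "helper T cell"     # CD3+ CD4+
    if "CD8" in markers:
        return "cytotoxic T cell"  # CD3+ CD8+
    return "other T cell"

# A pretend sample of four cells, each a set of detected markers.
cells = [{"CD3", "CD4"}, {"CD3", "CD8"}, {"CD19"}, {"CD3"}]
counts = {}
for cell in cells:
    label = classify_t_cell(cell)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # the "catalog" of this tiny cell population
```

Tallying those labels over millions of cells is what gives the population catalog described below.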
In a basic flow cytometry experiment, each antibody is labeled with a unique fluorescent dye – the antibody targeting CD3 might be red, say, and the CD4 antibody, green. The cells stream past a laser, one by one. The laser (or lasers – there can be as many as seven) excites the dye molecules decorating the cell surface, causing them to fluoresce. Detectors capture that light and give a count of how many total cells were measured and the types of cells. The result is a kind of catalog of the cell population. For immune cells, for example, that could be the number of T cells, B cells (which, among other things, help you “remember” previous invaders), and macrophages (the big cells that chomp up invaders and infected cells). By comparing the cellular catalogs that result under different conditions, researchers gain insight into development, disease, and the impact of drugs, among other things.
But here’s the problem: Unlike lasers, fluorescent dyes don’t produce light of exactly one particular color. They absorb and emit light over a range of colors, called a spectrum. And those spectra can overlap, such that when a researcher thinks she’s counting CD4 T cells, she may actually be counting some macrophages. That overlap leads to all sorts of experimental optimization issues. An exceptionally talented flow cytometrist can assemble panels of perhaps 12 or so dyes, but it might take months to get everything just right.
That’s where mass cytometry comes in. Commercialized by DVS Sciences, mass cytometry is essentially the love-child of flow cytometry and mass spectrometry, combining the one-cell-at-a-time analysis of the former with the atomic precision of the latter. Mass spectrometry identifies molecules based on the ratio of their mass to their charge. In DVS’ CyTOF mass cytometer, a flowing stream of cells is analyzed not by shining a laser on them, but by nuking them in superhot plasma. The nuking reduces the cell to its atomic components, which the CyTOF then measures.
Specifically, the CyTOF looks for heavy atoms called lanthanides, elements found in the first of the two bottom rows of the periodic table, like gadolinium, neodymium, and europium. These elements never naturally occur in biological systems and so make useful cellular labels. More to the point, the mass spectrometer is specific enough that these signals basically don’t overlap. The instrument will never confuse gadolinium for neodymium, for instance. Researchers simply tag their antibodies with lanthanides rather than fluorophores, and voila! Instant antibody panel, no (or little) optimization required.
Periodic Table of Cupcakes, with lanthanides in hot pink frosting. Source: http://www.buzzfeed.com/jpmoore/the-periodic-table-of-cupcakes
Now back to the paper. Nolan (who sits on DVS Sciences’ Scientific Advisory Board) and Bodenmiller wanted to see if mass cytometry could provide the sort of high-density, high-throughput cellular profiling that is required for drug development. The team took blood cells from eight donors, treated them with more than two dozen different drugs over a range of concentrations, added a dozen stimuli to which blood cells can be exposed in the body, and essentially asked, for each of the pathways we want to study, in each kind of cell in these patients’ blood, what did the drug do?
To figure that out, they used a panel of 31 lanthanides –- 10 to sort out the cell types they were looking at in each sample, 14 to monitor cellular signaling pathways, and 7 to identify each sample.
I love that last part, about identifying the samples. The numbers in this experiment are kind of staggering: 12 stimuli × 8 doses × 14 cell types × 14 intracellular markers per drug, times 27 drugs, is more than half a million pieces of data. To make life easier on themselves, the researchers pooled samples 96 at a time in individual tubes, adding a “barcode” to uniquely identify each one. That barcode (called a “mass-tag cellular barcode,” or MCB) is essentially a 7-bit binary number made of lanthanides rather than ones and zeroes: one sample would have none of the 7 reserved markers (0000000); one sample would have one marker (0000001); another would have another (0000010); and so on. Seven lanthanides produce 128 possible combinations, so it’s no sweat to pool 96. They simply mix those samples in a single tube and let the computer sort everything out later.
This graphic summarizes a boatload of data on cell signaling pathways impacted by different drugs. Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
When all was said and done, the team was able to draw some conclusions about drug specificity, person-to-person variation, cell signaling, and more. Basically, and not surprisingly, some of the drugs they looked at are less specific than originally thought – that is, they affect their intended targets, but other pathways as well. That goes a long way toward explaining side effects. But more to the point, they showed that their approach can be used to drive drug-screening experiments.