Towards better drug development, fewer side effects?

You may have had the experience: A medication you and a friend both take causes terrible side effects in you, but your friend experiences none. (The running joke in our house is, if a drug has a side effect, we’ve had it.) How does that happen, and why would a drug that’s meant to, say, stabilize insulin levels, produce terrible gastrointestinal side effects, too? A combination of techy-tech scientific approaches might help answer those questions for you – and lead to some solutions.

It’s no secret I love lab technology. I’m a technophile. A geek. I call my web site “Biotechnically Speaking.” So when I saw this paper in the September issue of Nature Biotechnology, well, I just had to write about it.

The paper is entitled, “Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators.” If you read that and your eyes glazed over, don’t worry – the article is way more interesting than its title.

Those trees on the right are called SPADE trees. They map cellular responses to different stimuli in a collection of human blood cells. Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
Here’s the basic idea: The current methods drug developers use to screen potential drug compounds – typically a blend of high-throughput imaging and biochemical assays – aren’t perfect. If they were, drugs wouldn’t fail late in development. Stanford immunologist Garry Nolan and his team, led by postdoc Bernd Bodenmiller (who now runs his own lab in Zurich), figured part of that problem stems from the fact that most early drug testing is done on immortalized cell lines, rather than “normal” human cells. Furthermore, the tests that are run on those cells aren’t as comprehensive as they could be, meaning potential collateral effects of the compounds might be missed. Nolan wanted to show that flow cytometry, a cell-analysis technique frequently used in immunology labs, can help reduce that failure rate by measuring drug impacts more holistically.


Nolan is a flow cytometry master. As he told me in 2010, he’s been using the technique for more than three decades, and even used a machine now housed in the Smithsonian.


In flow cytometry, researchers treat cells with reagents called antibodies, which are immune system proteins that recognize and bind to specific proteins on cell surfaces. Each type of cell has a unique collection of these proteins, and by studying those collections, it is possible to differentiate and count the different populations.


Suppose researchers wanted to know how many T cells of a specific type were present in a patient’s blood. They might treat those cells with antibodies that recognize a protein known as CD3 to pick those out. By adding additional antibodies, they can then select different T-cell subpopulations, such as CD4-positive helper T cells and CD8-positive cytotoxic T cells, both of which help you mount immune responses.


Cells of the immune system
Source: http://stemcells.nih.gov/info/scireport/chapter6.asp
In a basic flow cytometry experiment, each antibody is labeled with a unique fluorescent dye – the antibody targeting CD3 might be red, say, and the CD4 antibody, green. The cells stream past a laser, one by one. The laser (or lasers – there can be as many as seven) excites the dye molecules decorating the cell surface, causing them to fluoresce. Detectors capture that light and give a count of how many total cells were measured and the types of cells. The result is a kind of catalog of the cell population. For immune cells, for example, that could be the number of T cells, B cells (which, among other things, help you “remember” previous invaders), and macrophages (the big cells that chomp up invaders and infected cells). By comparing the cellular catalogs that result under different conditions, researchers gain insight into development, disease, and the impact of drugs, among other things.
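If you like to think in code, that counting logic is easy to sketch. Here’s a toy version in Python; the intensity threshold and the numbers are stand-ins I made up (real analysis uses dedicated software, calibration, and spectral compensation), but the gating idea is the same:

```python
from collections import Counter

# Toy gating sketch: classify each cell by which surface markers it carries.
# The threshold and intensity values are made up for illustration.
THRESHOLD = 1000

def classify(cell):
    """Assign a cell type from per-marker signal intensities."""
    if cell["CD3"] > THRESHOLD:        # CD3 marks the T-cell lineage
        if cell["CD4"] > THRESHOLD:
            return "helper T cell"
        if cell["CD8"] > THRESHOLD:
            return "cytotoxic T cell"
        return "other T cell"
    if cell["CD19"] > THRESHOLD:       # CD19 marks B cells
        return "B cell"
    return "other cell"

cells = [
    {"CD3": 2500, "CD4": 1800, "CD8": 90,   "CD19": 40},    # helper T
    {"CD3": 2200, "CD4": 60,   "CD8": 2100, "CD19": 55},    # cytotoxic T
    {"CD3": 30,   "CD4": 20,   "CD8": 10,   "CD19": 3100},  # B cell
]

print(Counter(classify(c) for c in cells))
# Counter({'helper T cell': 1, 'cytotoxic T cell': 1, 'B cell': 1})
```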


But here’s the problem: Fluorescent dyes aren’t like lasers, which produce light of essentially one particular color. Dyes absorb and emit light over a range of colors, called a spectrum. And those spectra can overlap, such that when a researcher thinks she’s counting CD4 T cells, she may actually be counting some macrophages. That overlap leads to all sorts of experimental optimization issues. An exceptionally talented flow cytometrist can assemble panels of perhaps 12 or so dyes, but it might take months to get everything just right.


That’s where mass cytometry comes in. Commercialized by DVS Sciences, mass cytometry is essentially the love-child of flow cytometry and mass spectrometry, combining the one-cell-at-a-time analysis of the former with the atomic precision of the latter. Mass spectrometry identifies molecules based on the ratio of their mass to their charge. In DVS’ CyTOF mass cytometer, a flowing stream of cells is analyzed not by shining a laser on them, but by nuking them in superhot plasma. The nuking reduces each cell to its atomic components, which the CyTOF then measures.

Specifically, the CyTOF looks for heavy atoms called lanthanides, elements found in the first of the two bottom rows of the periodic table, like gadolinium, neodymium, and europium. These elements never naturally occur in biological systems and so make useful cellular labels. More to the point, the mass spectrometer is specific enough that these signals basically don’t overlap. The instrument will never confuse gadolinium for neodymium, for instance. Researchers simply tag their antibodies with lanthanides rather than fluorophores, and voila! Instant antibody panel, no (or little) optimization required.

Periodic Table of Cupcakes, with lanthanides in hot pink frosting.
Source: http://www.buzzfeed.com/jpmoore/the-periodic-table-of-cupcakes
Now back to the paper. Nolan (who sits on DVS Sciences’ Scientific Advisory Board) and Bodenmiller wanted to see if mass cytometry could provide the sort of high-density, high-throughput cellular profiling that is required for drug development. The team took blood cells from eight donors, treated them with more than two dozen different drugs over a range of concentrations, added a dozen stimuli to which blood cells can be exposed in the body, and essentially asked, for each of the pathways we want to study, in each kind of cell in these patients’ blood, what did the drug do?


To figure that out, they used a panel of 31 lanthanides – 10 to sort out the cell types they were looking at in each sample, 14 to monitor cellular signaling pathways, and 7 to identify each sample.


I love that last part, about identifying the samples. The numbers in this experiment are kind of staggering: 12 stimuli x 8 doses x 14 cell types x 14 intracellular markers per drug, times 27 drugs, is more than half-a-million pieces of data. To make life easier on themselves, the researchers pooled samples 96 at a time in individual tubes, adding a “barcode” to uniquely identify each one. That barcode (called a “mass-tag cellular barcode,” or MCB) is essentially a 7-bit binary number made of lanthanides rather than ones and zeroes: one sample would have none of the 7 reserved markers (0000000); one sample would have one marker (0000001); another would have another (0000010); and so on. Seven lanthanides produce 128 possible combinations, so it’s no sweat to pool 96. They simply mix those samples in a single tube and let the computer sort everything out later.
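Because I can’t resist a good encoding scheme, here’s a minimal sketch of that barcode logic in Python. The seven isotope tags named below are stand-ins of my own choosing, not necessarily the paper’s actual panel, but the binary bookkeeping works just as described:

```python
# Toy sketch of mass-tag cellular barcoding (MCB): each pooled sample gets a
# unique pattern of present/absent lanthanide tags, read as a 7-bit number.
BARCODE_TAGS = ["La139", "Pr141", "Nd142", "Nd144", "Nd146", "Sm147", "Sm149"]

def encode(sample_index):
    """Return the set of tags to spike into sample number sample_index."""
    assert 0 <= sample_index < 2 ** len(BARCODE_TAGS)  # 2**7 = 128 codes
    return {tag for bit, tag in enumerate(BARCODE_TAGS)
            if (sample_index >> bit) & 1}

def decode(tags_detected):
    """Recover a sample number from the tags detected on a measured cell."""
    return sum(1 << bit for bit, tag in enumerate(BARCODE_TAGS)
               if tag in tags_detected)

# 96 pooled samples fit comfortably within the 128 available codes:
assert all(decode(encode(i)) == i for i in range(96))
print(encode(0), encode(1))   # set() {'La139'}
```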


This graphic summarizes a boatload of data on cell signaling pathways impacted by different drugs.
Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
When all was said and done, the team was able to draw some conclusions about drug specificity, person-to-person variation, cell signaling, and more. Basically, and not surprisingly, some of the drugs they looked at are less specific than originally thought – that is, they affect not only their intended targets but other pathways as well. That goes a long way towards explaining side effects. But more to the point, they showed that their approach can be used to drive drug-screening experiments.


And I get to write about it. 

Did Einstein write his most famous equation? Does it matter?

Why all the fuss about E = mc²?

By Matthew R. Francis

Albert Einstein in Pittsburgh, 1934. (Credit: Pittsburgh Sun-Telegraph/Dwight Vincent and David Topper)

The association is strong in our minds: Albert Einstein. Genius. Crazy hair. E = mc². Maybe many people don’t know what else Einstein did, but they know about the hair and that equation. They may think he flunked math in school (wrong, though he did have conflicts with some teachers), that he was a ladies’ man (true, he had numerous affairs during both of his marriages), and that he was the smartest man who ever lived (debatable, though he certainly is one of the central figures in 20th century physics). Rarely, people will remember that he was a passionate antiracist and advocate for world government as a way of bringing peace.

Obviously whole books have been written about Einstein and E = mc², but a blog post at io9 caught my attention recently. The post (by George Dvorsky) itself looked back to a scholarly paper by David Topper and Dwight Vincent [1], which reconstructed a public lecture Einstein gave in 1934. (All numbers in square brackets [#] are citations to the references at the end of this post.) This lecture was one of many Einstein presented over the decades, but as Topper and Vincent wrote, “As far as we know [the photograph] is the only extant picture with Einstein and his famous equation.”

Well, kind of. The photograph is really blurry, and the authors had to reconstruct what was written because you can’t actually see any of the equations clearly. Even in the reconstructed version (reproduced below)…there’s no E = mc². Instead, as I highlighted in the image, the equation is E₀ = m. Einstein set the speed of light – usually written as a very large number like 300 million meters per second, or 186,000 miles per second – equal to 1 in his chalkboard talk.

Einstein’s most famous equation, sort of. This is the transcription of the chalkboard from a public talk Einstein gave in Pittsburgh in 1934. (Credit: Dwight Vincent and David Topper)

What’s the meaning of this?

It is customary to express the equivalence of mass and energy (though somewhat inexactly) by the formula E = mc², in which c represents the velocity of light, about 186,000 miles per second. E is the energy that is contained in a stationary body; m is its mass. The energy that belongs to the mass m is equal to this mass, multiplied by the square of the enormous speed of light – which is to say, a vast amount of energy for every unit of mass. –Albert Einstein [2]

Before I explain why it isn’t a big deal to modify an equation the way Einstein did, it’s good to remember what E = mc² means. The symbols are simple, but they encode some deep knowledge. E is energy; while colloquially that term gets used for a lot of different things, in physics it’s a measure of the ability of a system to do things. High energy means fast motion, or the ability to make things move fast, or the ability to punch through barriers. Mass m, on the other hand, is a measure of inertia: how hard it is to change an object’s motion. If you kick a rock on the Moon, it will fly farther than it would on Earth, but it’ll hurt your foot just as much – it has the same mass, and therefore the same inertia, in both places. Finally, c is the speed of light, a fundamental constant of nature. The speed of light is the same for an object of any mass, moving at any velocity.

Mass and energy aren’t independent, even without relativity involved. If you have a heavy car and a light car driving at the same speed, the more massive vehicle carries more energy, in addition to taking more oomph to start or stop it moving. However, E = mc² means that even if a mass isn’t moving, it has an irreducible amount of energy. Because the speed of light is a big number, and the square of a big number is huge, even a small amount of mass possesses a lot of energy.
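To get a feel for the scale, plug a single gram of mass (a paperclip’s worth) into the formula, using the rounded value c = 3 × 10⁸ meters per second. This is just my back-of-the-envelope arithmetic, not a figure from the article:

```latex
E = mc^2 = (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^2
         = 9 \times 10^{13}\,\mathrm{J}
```

That’s 90 trillion joules – roughly the energy released by a 20-kiloton atomic bomb – locked up in one gram of anything.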

The implications of E = mc² are far-reaching. When a particle of matter and its antimatter partner meet – say, an electron and a positron – they mutually annihilate, turning all of their mass into energy in the form of gamma rays. The process also works in reverse: under certain circumstances, if you have enough excess energy in a collision, you can create new particle-antiparticle pairs. For this reason, physicists often write the mass of a particle in units of energy: the minimum energy required to make it. That’s why we say the Higgs boson mass is 126 GeV – 126 billion electron-volts, where 1 electron-volt is the energy gained by an electron moved by 1 volt of electricity. For comparison, an electron’s mass is about 511 thousand electron-volts, and a proton is 938 million electron-volts.
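As a sanity check on that 511 figure, the conversion from kilograms to electron-volts is straightforward arithmetic (my rounded constants, not values from the text):

```latex
E = m_e c^2 = (9.11 \times 10^{-31}\,\mathrm{kg}) \times (3.00 \times 10^{8}\,\mathrm{m/s})^2
            \approx 8.2 \times 10^{-14}\,\mathrm{J},
\qquad
\frac{8.2 \times 10^{-14}\,\mathrm{J}}{1.6 \times 10^{-19}\,\mathrm{J/eV}}
            \approx 5.1 \times 10^{5}\,\mathrm{eV} \approx 511\,\mathrm{keV}.
```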

In our ordinary units the velocity of light is not unity, and a rather artificial distinction between mass and energy is introduced. They are measured by different units, and energy E has a mass E/C² where C is the velocity of light in the units used. But it seems very probable that mass and energy are two ways of measuring what is essentially the same thing, in the same sense that the parallax and distance of a star are two ways of expressing the same property of location. –Arthur Eddington [3]

Another side of the equation E = mc² appears when we probe the structure of atomic nuclei. An atomic nucleus is built of protons and neutrons, but the total nuclear mass is different from the sum of the masses of the constituent particles: part of the mass is converted into binding energy to hold everything together. The case is even more dramatic for protons and neutrons themselves, which are made of smaller particles known as quarks – but the total mass of the quarks is much smaller than the proton or neutron mass. The extra mass comes from the strong nuclear force gluing the particles together. (In fact, the binding particles are known as gluons for that reason, but that’s a story for another day.)

A brief history of an idea

The E₀ = m version of the equation Einstein used in his chalk-talk might seem like it’s a completely different thing. You might be surprised to know that he almost never used the famous form of his own discovery: He preferred either the chalkboard version or the form m = E/c². In fact, in his first scientific paper on the subject (which was also his second paper on relativity), he wrote [4]:

If a body gives off the energy L in the form of radiation, its mass diminishes by L/c². The fact that the energy withdrawn from the body becomes energy of radiation evidently makes no difference, so that we are led to the more general conclusion that … the mass of a body is a measure of its energy-content …

In other words, he originally used L for energy instead of E. However, it’s equally obvious that the meaning of E = mc² is present in the paper. Equations, like sentences in English, can often be written in many different ways and still convey the same meaning. By 1911 (possibly earlier), Einstein was using E for energy [5], but we can use E or L or U for energy, as long as we make it clear that’s what we’re doing.

The same idea goes for setting c equal to one. Many of us are familiar with the concept of space-time: that time is joined with space (thanks to the fact that the speed of light is the same, no matter who measures it). We see the blurring of the boundary between space and time when astronomers speak of light-years: the distance light travels in one year. Because c – and therefore c² – is a fixed number, it means the difference between mass and energy is more like the difference between pounds and kilograms: one is reachable from the other by a simple calculation. Many physicists, including me, love to use c = 1 because it makes equations much easier to write.

In fact, physicists (including Einstein) rarely use E = mc² or even m = E/c² directly. When you study relativity, you find those equations are specific forms of more general expressions and concepts. To wit: The energy of a particle is only proportional to its mass if you take the measurement while moving at the same speed as the particle. Physical quantities in relativity are measured relative to their state of motion – hence the name.
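For the curious, the general expression physicists actually use is the energy–momentum relation (a standard result of special relativity, not something specific to this article):

```latex
E^2 = (pc)^2 + (mc^2)^2
```

For a particle at rest, p = 0 and the relation collapses to E = mc²; set c = 1 and it reads E² = p² + m², whose rest-frame form is exactly the chalkboard’s E₀ = m.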

That’s the reason I don’t care that we don’t have a photo of Einstein with his most famous equation, or that he didn’t write it in its familiar form in the chalk-talk. The meaning of the equation doesn’t depend on its form; its usefulness doesn’t derive from Einstein’s way of writing it, or even from Einstein writing it.

A small representative sample of my relativity books, with my cats Pascal and Harriet for scale.

Even more: Einstein is not the last authority on relativity, but the first. I counted 64 books on my shelves that deal with the theory of relativity somewhere in their pages, and it’s possible I missed a few. The earliest copyright is 1916 [6]; the most recent are 2012, more than 50 years after Einstein’s death. The level runs from popular science books (such as a couple of biographies) up to graduate-level textbooks. Admittedly, the discussion of relativity may not take up much space in many of those books – the astronomy and math books in particular – but the truth is that relativity permeates modern physics. Like vanilla in a cake, it flavors many branches of physics subtly; in its absence, things just aren’t the same.

References

  1. David Topper and Dwight Vincent, “Einstein’s 1934 two-blackboard derivation of energy-mass equivalence.” American Journal of Physics 75 (2007): 978. DOI: 10.1119/1.2772277. Also available freely in PDF format.
  2. Albert Einstein, “E = mc².” Science Illustrated (April 1946). Republished in Ideas and Opinions (Bonanza, 1954).
  3. Arthur Eddington, Space, Time, and Gravitation (Cambridge University Press, 1920).
  4. Albert Einstein, “Does the inertia of a body depend upon its energy-content?” (translated from “Ist die Trägheit eines Körpers von seinem Energiegehalt abhängig?”). Annalen der Physik 17 (1905). Republished in the collection of papers titled The Principle of Relativity (Dover Books, 1953).
  5. Albert Einstein, “On the influence of gravitation on the propagation of light” (translated from “Über den Einfluss der Schwerkraft auf die Ausbreitung des Lichtes”). Annalen der Physik 35 (1911). Republished in The Principle of Relativity.
  6. Albert Einstein, Relativity: The Special and the General Theory (1916; English translation published by Crown Books, 1961).

So What’s the Big Deal About the Higgs Boson, Anyway? A Physics Double Xplainer

The ATLAS detector at the Large Hadron Collider, one of the detectors that found the new particle.
By Matthew Francis, physics editor

After decades of searching and many promising results that didn’t pan out, scientists working at the Large Hadron Collider in Europe announced Wednesday they had found a new particle. People got really excited, and for good reason! This discovery is significant no matter how you look at it: If the new particle is the Higgs boson (which it probably is), it provides the missing piece to complete the highly successful Standard Model of particles and interactions. If the new particle isn’t the Higgs boson, well…that’s interesting too.

So what’s the big deal? What is the Higgs boson? What does Wednesday’s announcement really mean? What’s the meaning of life? Without getting too far over my head, let me try to answer at least some of the common questions people have about the Higgs boson, and what the researchers in Europe found. If you’d rather have everything in video form, here’s a great animation by cartoonist Jorge Cham and an elegant explanation by Ian Sample. Ethan Siegel also wrote a picture-laden joyride through Higgs boson physics; you can find a roundup of even more posts and information at Wired and at Boing-Boing. (Disclaimer: my own article about the Higgs is linked both places, so I may be slightly biased.)

Q: What is the Higgs boson?
A: The Higgs boson is a particle predicted by the Standard Model. It’s a manifestation of the “Higgs field”, which explains why some particles have mass and other particles don’t.

Q: Whoa, too fast! What’s a boson?
A: A boson is a large mammal indigenous to North America. No wait, that’s bison. [Ed note: Ha. Ha. Ha.] On the tiniest level, there are two basic types of particles: fermions and bosons. You’re made of fermions: the protons, neutrons, and electrons that are the constituents of atoms are all fermions. On a deeper level, protons and neutrons are built of quarks, which are also fermions. Bosons carry the forces of nature; the most familiar are photons – particles of light – which are manifestations of the electromagnetic force. There are other differences between fermions and bosons, but we don’t need to worry about them for now; if you want more information, I wrote a far longer and more detailed explanation at my personal blog.

Q: What does it mean to be a “manifestation” of a force?
A: The ocean is a huge body of water (duh), but it’s always in motion. You can think of waves as manifestations of the ocean’s motion. The electromagnetic field (which includes stuff like magnets, electric currents, and light) manifests itself in waves, too, but those waves only come in distinct indivisible chunks, which we call photons. The Higgs boson is also a manifestation of a special kind of interaction.

Q: How many kinds of forces are there?
A: There are four fundamental forces of nature: gravity, electromagnetism, and the two nuclear forces, creatively named the weak and strong forces. Gravity and electromagnetism are the forces of our daily lives: Gravity holds us to Earth, and electromagnetism does nearly everything else. If you drop a pencil, gravity makes it fall, but your holding the pencil is electromagnetic, based on how the atoms in your hand interact with the atoms in the pencil. The nuclear forces, on the other hand, are very short-range forces and are involved in (wow!) holding the nuclei of atoms together.

Q: OK, so what does the Higgs boson have to do with the fundamental forces?
A: All the forces of nature have certain things in common, so physicists from Einstein on have tried to describe them all as aspects of a single force. This is called unification, and to this day, nobody has successfully accomplished it. (Sounds like a metaphor for something or other.) However, unification of electromagnetism with the weak force was accomplished, yielding the electroweak theory. But there was a problem with the first version: It simply didn’t work if electrons, quarks, and the like had mass. Because particles obviously do have mass, something was wrong. That’s where the Higgs field and Higgs boson come in. British physicist Peter Higgs and his colleagues figured out that if there was a new kind of field, it could explain both why the electromagnetic force and weak force behave differently and provide mass to the particles.

Q: Wait, I thought mass is fundamental?
A: One of the insights of modern physics is that particles aren’t just single objects: They are defined by interactions. Properties of particles emerge out of their interactions with fields, and mass is one of those properties. (That makes unifying gravity with the other forces challenging, which is a story for another day!) Some particles are more susceptible to interacting with the Higgs. An analogy I read (and apologies for not remembering where I read it) says it’s like different shoes in the snow. A snowshoe corresponds to a low-mass particle: very little snow mass sticks to it. A high-mass particle interacts strongly with the Higgs field, so that’s like hiking boots with big treads: lots of places for the snow to stick. Electrons are snowshoes, but the heaviest quarks are big ol’ hiking boots.

Q: Are there Higgs bosons running around all over the place, just like there are photons everywhere?
A: No, and it’s for the same reason we don’t see the bosons that carry the weak force. Unlike photons, the Higgs boson and the weak force bosons (known as the W and Z bosons – our particle physics friends run out of creative names sometimes) are relatively massive. Many massive particles decay quickly into less massive particles, so the Higgs boson is short-lived.

Q: So how do you make a Higgs boson?
A: The Higgs field is everywhere (like The Force in Star Wars), but to make a Higgs boson, you have to provide enough energy to make its mass. Einstein’s famous formula E = mc² tells us that mass and energy are interchangeable: If you have enough energy (in the right environment), you can make new particles. The Large Hadron Collider (LHC) at CERN in Europe and the Tevatron at Fermilab in the United States are two such environments: Both accelerate particles to close to the speed of light and smash them together. If the collisions are right, they can make a Higgs boson.

Q: Is this new particle actually the Higgs boson then?
A: That’s somewhat tricky. While the Standard Model predicts the existence of a Higgs boson, it doesn’t tell us exactly what the mass should be, which means the energy to make one isn’t certain. However, we have nice limits on the mass the Higgs could have, based on the way it interacts with other particles like the other bosons and quarks. This new particle falls in that range and has other characteristics that say “Higgs.” This is why a lot of physics writers, including me, will say the new particle is probably the Higgs boson, but we’ll hedge our bets until more data come in. The particle is real, though: the ATLAS and CMS detectors at CERN independently saw the same particle with the same mass, and the DZero and CDF detectors at Fermilab reported corroborating evidence in the same mass range.

Q: But I’m asking you as a friend: Is this the Higgs boson?
A: I admit: a perverse part of me hopes it’s something different. If it isn’t the Higgs boson, it’s something unexpected and may not correspond to anything predicted in any theory! That’s an exciting and intriguing result. However, my bet is that this is the Higgs boson, and many (if not most) of my colleagues would agree.

Q: What’s all this talk about the “God particle”?
A: Physicists HATE it when the Higgs boson is called “the God particle.” Yes, the particle is important, but it’s not godlike. The term came from the title of a book by physicist Leon Lederman; he originally wanted to call it “The Goddamn Particle”, since the Higgs boson was so frustrating to find, but his editor forced a change.

Q: Why should I, as a non-physicist, care about this stuff?
A: While it’s unlikely that the discovery of the Higgs boson will affect you directly, particle colliders like the LHC and Tevatron have spurred development of new technologies. However, that’s not the primary reason to study this. By learning how particles work, we learn about the Universe, including how we fit into it. The search for new particles meshes with cosmology (my own area): It reveals the nature of the Universe we inhabit. I find a profound romance in exploring our Universe, learning about our origins, and discovering things that are far from everyday. If we limit the scope of exploration only to things that have immediate practical use, then we might as well give up on literature, poetry, movies, religion, and the like right now.

Q: If this is the Higgs boson, is that the final piece of the puzzle? Is particle physics done?
A: No, and in fact bigger mysteries remain. The Higgs boson is predicted by the Standard Model, but we know that about 80% of the matter in the Universe is in the form of dark matter, stuff that doesn’t emit or absorb light. We don’t know exactly what dark matter is, but it’s probably a particle – which means particle colliders may be able to figure it out. Hunting for an unknown particle is harder than looking for one we are pretty sure exists. Finding the Higgs (if I may quote myself) is like The Hobbit: It’s a necessary tale, but the bigger epic of The Lord of the Rings is still to come.

Mental illness, autism, and mass murder, and why Joe Scarborough needs to stop talking

Via Wikimedia Commons. Public domain. 

[Ed. note: Some of this information comes from a post that previously appeared at The Biology Files following the shooting massacre in Arizona targeting U.S. Rep. Gabrielle Giffords, among others.]

By Emily Willingham 

Today, Joe Scarborough at MSNBC warned viewers not to generalize about the horrific events in Aurora, CO, and then proceeded to opine that the killer in question was “on the autism scale.” I’m not exactly sure what “on the autism scale” means, as I’ve never in all my years of involvement in the autism community come across such a device, but many of us in that community were waiting – nay, expecting – something like this almost from the minute we learned who had committed these murders. Too bad it came from a parent member of that community.

Hey, Joe, you’ve got a gun in your hand, and it’s not like the one that the who-knows-what-his-disorder-is murderer in Aurora used. No. Your weapon is of a more subtle nature, and you wield it from a venue that reaches millions of people who don’t know that the ammo you’re firing is empty bullshit. But that bullshit ends up smearing the autistic community as violent criminals capable of all manner of psychotic behavior, including the taking of innocent lives and the well-planned rigging of an apartment building with dangerous explosives. And you must understand this on some level, as you have a son who is on the autism spectrum.

Here’s the thing, Joe. You’re conflating what can be very personal, nonfatal aggression of an overwhelmed autistic person with the wanton and willful and carefully planned destruction of total strangers in a crowded theater. Yes, some autistic people are aggressive, in the moment, in response to a moment, to being overwhelmed and not understood, to being mishandled and misused. That sort of aggression is a very, very different animal from the sort of cold, calculated malevolence that leads a young man to inflict tragedy across a large swath of humanity, total strangers to him, arriving with a measured burst of deadly force before calmly surrendering himself to authorities. You, Joe Scarborough, see that behavior as somehow “on the autism scale.” Anyone who has even a mild grasp of autism knows how very far from reality that kind of behavior is for an autistic person. 

So let’s talk about violence. 
A look at the violence literature reveals two rough categories of violent brains and their genetics: the brain of the impulsively or hostilely violent, and the brain of the proactively, or instrumentally, violent – the one who carefully plans the violent act rather than committing it in the heat of the moment. Impulsive violence, thanks to its unpredictability and relative ubiquity, seems to get the bulk of the attention. Proactive violence, which encompasses the planned violence of war, is a different animal altogether. And psychopathic instrumental violence may well be the most terrifying of them all. The two appear to have very different underlying mechanisms and origins, as well:

Biological models of violence have identified distinct neural patterns that characterize each type of violence. For example, the “low-arousal” aggressor more likely to commit instrumental violence is underreactive and responds sluggishly to stressors. In contrast, the “high-arousal” aggressor who is more prone to hostile violence tends to be hypervigilant and easily frustrated.
In humans, instrumental aggression is roughly analogous to predatory aggression although it is limited to intraspecies behavior…. Similarly, emotional or hostile aggression in humans could be considered the analogue of defensive aggression in response to a threat or perceived threat.

No one – and I mean no one – has a clue what drove this man to commit his heinous crimes. What we do know is that he planned his hellish introduction into our psyches for months beforehand, carefully accumulating all the accouterments needed to generate a national and personal nightmare. What we also know is that he carefully planned his violent act; it was not, like an autistic meltdown, an act of the moment, an unplanned reaction.

And you’re wrong on some other counts as well, demonstrating the real dangers of a weapon like yours in the hands of the uninformed. You said that the minute you heard about the shooting, you knew it would be a young white male, probably from “an affluent neighborhood.” While being young, white, and male may fit the profile of many serial killers, mass murderers are a different breed. They come from different backgrounds and ethnicities, but most share a single motivation: revenge. When they go beyond personal connections in their targets and kill total strangers, that revenge is usually against a society the killer thinks has wronged him.

Other features in common are being male, being a “loner,” and feeling alienated from the world. For the record, “autistic” does not equate with “loner” or “male,” as much as you or the news media would like to distort it into that mold. Research, such as it is, suggests that the more a killer goes impersonal and targets strangers, the more likely a mental illness is to be involved. While that mental illness is usually paranoid schizophrenia, we must all remember that there are many, many more murderers in this world who are not schizophrenic than there are schizophrenics who commit this kind of violence. The coupling is not inevitable or even common. Indeed, better predictors of violence are unemployment, physical abuse, and recent divorce. The killer in the Aurora case had recently in effect become unemployed, having left graduate school and done poorly on spring exams. 

I’ll close with this final observation: Autism is a disorder that is present from birth or very soon after. There are, however, other mental disorders and mental breaks that occur, particularly in young men and particularly at vulnerable developmental periods like adolescence and early adulthood. Not only does autism not fit here simply by virtue of its lifelong presence, but also, it’s not something that just kinda shows up when a man turns 24 years old. 

The man who destroyed so many lives showed several signs of extreme stress prior to his murderous rampage. Were these stressors the trigger for him? That I cannot say. But I can say that stress does not bring on autism in one’s 20s, and autism at any age doesn’t lead to carefully calculated revenge killings of innocent strangers. So, Joe, why don’t you just put down your weapon and back away… as quickly as you can.

These views are the opinion of the author and do not necessarily either reflect or disagree with those of the DXS editorial team. 

Crowdfunding on the Brain: Finding Biomarkers for Early Autism Diagnosis

By Jeanne Garbarino, biology editor


If a child is diagnosed with autism spectrum disorder (ASD), it is because they have gone through a number of rigorous behavioral tests, often over a long period of time, and the process is never straightforward. Of course, this time can be stressful for parents or caregivers, and sometimes the answers can lead to even more questions. One solution to the waiting and uncertainty would be to have a medical test that could more easily diagnose ASD. However, no one has been able to identify biomarkers – molecules in the body that can help define a specific medical condition – for the condition. Without this type of information, it is not possible to create a diagnostic test for autism.


Having been through this process with their son, who is on the autism spectrum, Clarkson University scientists Costel Darie and Alisa Woods have decided to work together to help address this issue. An interdisciplinary laboratory that combines hardcore proteomics (the study of the proteins we make) with cognitive neuroscience is probably not what you think of when it comes to running a family business. But for Darie and Woods, “marriage” has many meanings. This husband and wife team has combined their brainpower to embark on a scientific journey toward understanding some of the biochemistry behind autism, and they are walking on an increasingly popular path to help finance their work: crowdfunding.


A major goal of the Darie Lab is to identify biomarkers that are associated with autism and then to create a medical test to help alleviate some of the frustrations that come with the ASD diagnostic process. Using a technology called high-definition mass spectrometry, the Darie Lab has outlined a project to figure out the types of proteins that are in the saliva or blood of children with ASD and compare these protein profiles to the saliva or blood from children who are not on the autism spectrum. If the Darie Lab is successful, they might be able to help create a diagnostic test for early autism detection, which would undoubtedly fill a giant void in the field of autism research and treatment.


Here is how the experiment will work: The members of the Darie Lab will collect saliva (and/or blood) samples from children, half of whom are on the autism spectrum and half of whom are not. The researchers will prepare the saliva or blood and collect the proteins. Each protein will be analyzed by a high definition mass spectrometer, which is basically a small scale for measuring the weight and charge of a protein. The high definition mass spectrometer will transfer information about the proteins to a computer, with special software allowing the Darie Lab investigators to figure out the exact makeup of proteins in each sample.


The bottleneck when it comes to these experiments is not getting samples (saliva and blood are easy to collect), and it isn’t the high-tech high-definition mass spectrometer because they have access to one.  Rather, the bottleneck comes from the very high cost of the analytical software they need. Because this software was not included in their annual laboratory budget but is critical to conducting this experiment, the Darie Lab is raising money through crowdfunding.


Why I think a contribution is worth the investment: Technology is always advancing, especially when it comes to protein biochemistry. The high-definition mass spectrometer is a recent technology, and according to the Darie Lab, they have been able to identify over 700 proteins in the saliva alone. This is quite an incredible step up from traditional mass spectrometers, which could detect only around 100 proteins in saliva. Just because we haven’t been able to identify biomarkers for autism in the past doesn’t mean we can’t do it now. 

In addition to the use of this new technology, the Darie Lab presents some compelling preliminary evidence for a difference in protein profiles between those with ASD and those who do not have ASD. While they’ve examined only three autistic people and compared them to three non-ASD individuals, the two groups were clearly distinct in their saliva protein profiles. If this pattern holds up with an increased number of study participants, the implications could be quite significant for autism research.      
Preliminary data from the Darie Lab shows that there are saliva proteins showing a 20X or greater difference between ASD (ovals) versus sibling non-ASD controls (rectangles).
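To make “protein profiles” concrete, here’s a toy sketch in Python of the kind of comparison involved – flagging proteins whose average signal differs by 20X or more between groups. The protein names and intensity values are invented for illustration, not the lab’s data:

```python
# Toy sketch: flag proteins whose mean abundance differs by >= 20-fold
# between ASD and non-ASD samples. All names and numbers are made up.
asd     = {"protein_A": [900, 1100, 950], "protein_B": [40, 55, 60]}
control = {"protein_A": [30, 45, 50],     "protein_B": [50, 48, 61]}

def mean(values):
    return sum(values) / len(values)

for protein in asd:
    a, c = mean(asd[protein]), mean(control[protein])
    fold = max(a, c) / min(a, c)   # direction-agnostic fold change
    if fold >= 20:
        print(f"{protein}: {fold:.1f}x difference")
# protein_A: 23.6x difference
```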

If you decide to kick in some funds, your good deed will not go unrewarded. As a thank-you for contributing, the Darie Lab has offered up a few cool perks, including high-quality prints of microscope images of the brain.



If you are looking for a good cause, look no further. I am excited to see how the Darie Lab crowdfunding experience goes, and I wish them all the best in their quest, both as professionals and as parents. To find out more, or to make a donation, visit the Darie Lab RocketHub page.

Fluorescent images of the brain, available to those donating $100 or more.
The opinions expressed in this post do not necessarily agree or conflict with those of the DXS editorial team and contributors.

Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more implausible is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.


Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.
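A quick back-of-the-envelope calculation (mine, not the book’s) shows just how fast those combinations grow:

```python
# k types of building blocks in a chain of length n gives k**n possible
# polymers, ignoring any chemical constraints on which can join which.
for k, n in [(2, 8), (10, 6), (20, 10)]:
    print(f"{k} building blocks, chain of {n}: {k**n:,} possibilities")
# 2 building blocks, chain of 8: 256 possibilities
# 10 building blocks, chain of 6: 1,000,000 possibilities
# 20 building blocks, chain of 10: 10,240,000,000,000 possibilities
```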

Our bodies can then rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. We build this energy reserve primarily in the liver and access it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows. Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat: The fat is saturated to its fullest, with every available covalent bond taken by a hydrogen single-bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat – coconuts, for example, are also well known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is – one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with carbon, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be partially hydrogenated–re-saturated with hydrogens–to behave more like saturated fats and have a longer shelf life. Partial hydrogenation also flips some of the remaining double bonds into a different configuration, yielding trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.
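
You can make the “saturated with what?” idea concrete by counting atoms. A fatty acid with n carbons and no double bonds has the chemical formula CnH2nO2, and each carbon-carbon double bond evicts two hydrogens. Here’s a minimal sketch in Python (the fatty acid names and formulas are standard chemistry; the little function is just for illustration):

```python
def hydrogen_count(carbons: int, double_bonds: int) -> int:
    """Hydrogens in a fatty acid: 2n when fully saturated with hydrogen,
    minus 2 for each carbon-carbon double bond (each 'unsaturation')."""
    return 2 * carbons - 2 * double_bonds

# All 18 carbons long: saturated stearic acid for comparison, plus the
# three unsaturated fatty acids of canola oil named above.
for name, dbl in [("stearic", 0), ("oleic", 1), ("linoleic", 2), ("linolenic", 3)]:
    print(f"{name}: C18H{hydrogen_count(18, dbl)}O2")
# stearic: C18H36O2, oleic: C18H34O2, linoleic: C18H32O2, linolenic: C18H30O2
```

The fewer the hydrogens, the more kinks in the chain–and, as described above, the more likely the fat is liquid at room temperature.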

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–mostly saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoonful of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. The bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are amphipathic–part water-loving, part water-fearing–with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the amphipathic (hydrophobic plus hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at the temperatures at which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid trifecta is cholesterol. As you may have heard, there is “good” cholesterol and “bad” cholesterol–labels that actually refer to the particles that carry cholesterol in the blood. The good form, high-density lipoprotein, or HDL, in part helps us out because it removes the bad form, low-density lipoprotein, or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids, or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or a short protein, like insulin.

Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while a longer chain of many amino acids is a polypeptide. When the amino acids in such a chain interact with each other and the molecule folds properly, we call it a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. As the chain forms, hydrogen bonds between atoms along its backbone pull stretches of it into regular local shapes, such as coiled helices and pleated sheets. We call these local conformations along the amino acid chain the protein’s secondary structure.

Once those local structures have formed, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. Here the chemistry of the individual amino acids matters: some are hydrophobic and some are hydrophilic, and like interacts best with like, so the hydrophobic amino acids cluster together while the hydrophilic ones face the watery surroundings. These interactions–along with the right pH, temperature, and salt balance–fold the secondary structures along the different parts of the amino acid chain into the final shape.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded-up string of pearls, but that “wadded-up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact.

In many instances, however, a complete protein consists of more than one amino acid chain. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains, each properly folded into its tertiary structure, all interacting as a single unit. In cases like this, involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.
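
If four letters seem like a skimpy alphabet, a little arithmetic shows why they’re plenty: a stretch of DNA n bases long can be spelled 4^n different ways. A quick sanity check in Python (the numbers here are just arithmetic, not figures from any study):

```python
# Number of possible sequences for a DNA stretch of length n: 4 ** n
for n in (3, 10, 100):
    print(f"a {n}-base stretch has {4 ** n:.2e} possible sequences")
# a 3-base stretch has 64; a 10-base stretch has about a million;
# a 100-base stretch has about 1.6e60 -- and genes run thousands of bases long.
```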

RNA is similarly simple at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.
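
The pairing rules are simple enough to write down as a lookup table. Here’s a minimal sketch in Python of how a template strand dictates its partner, for both DNA-to-DNA and DNA-to-RNA copying (the six-base sequence is made up purely for illustration):

```python
# Base-pairing rules from the paragraph above
DNA_PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}   # DNA copied into DNA
RNA_PAIRS = {"A": "U", "T": "A", "C": "G", "G": "C"}   # DNA copied into RNA

def partner_strand(template: str, pairs: dict) -> str:
    """Build the strand dictated by a template, one base at a time."""
    return "".join(pairs[base] for base in template)

template = "ATGCGT"                         # a made-up scrap of DNA
print(partner_strand(template, DNA_PAIRS))  # TACGCA -- a new DNA strand
print(partner_strand(template, RNA_PAIRS))  # UACGCA -- an RNA copy
```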

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.

RNA is really a nucleic acid jack-of-all-trades. Messenger RNA serves as the working copy of the DNA, while transfer RNA and ribosomal RNA are the main components of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.


 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Is it really healthier to be a few pounds overweight? That’s not what the study says.

Don’t start making plans to ignore those extra pounds just yet.

by Jennifer Gunter, MD, FRCS(C), FACOG, DABPM

This post first appeared at Dr. Gunter’s blog, where she wields the lasso of truth.

A new study published in the Journal of the American Medical Association (JAMA) indicates that a body mass index, or BMI, of 25-29.9 (overweight) is associated with the lowest risk of death and that class 1 obesity (BMI 30-34.9) is not associated with an increased risk of mortality. As this study hit the presses January 2nd (and I’m sure no editorial thought was given by JAMA to such a study coming out at the first of the year), when many people are thinking about weight loss resolutions, it was covered widely in the press, and I read several op-eds claiming vindication for obesity. One op-ed on a major news site was indignant that CT scanners couldn’t accommodate a friend (some CT scanners have difficulty accommodating patients over 300 lbs). The author’s solution? Build bigger CT scanners, because obesity isn’t bad at all. This new study proves it.

First of all, the study doesn’t say that being overweight is good for you and that being an ideal weight is bad. What the study does tell us is that people who have a BMI of 35 or greater are more likely to die. This is not new information. A BMI of 35 is a lot of extra weight; depending on your height, it could easily mean 70 extra pounds or more. 15% of Americans have a BMI of 35 or greater. Only people with a BMI over 35, way over 35, need bigger CT scanners. I’m not saying that severely obese people shouldn’t have access to imaging studies, but the answer to the epidemic of severe obesity is not to claim vindication based on the inaccurate interpretation of one study and simply build bigger equipment.
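
To see where “70 extra pounds or more” comes from, you only need the BMI formula itself: BMI = 703 × weight in pounds / (height in inches)². A minimal sketch in Python (the 5'10" example height is mine, chosen for illustration):

```python
def weight_for_bmi(target_bmi: float, height_in: float) -> float:
    """Invert the BMI formula (BMI = 703 * lb / in**2): weight giving target_bmi."""
    return target_bmi * height_in ** 2 / 703

height = 70  # a hypothetical 5'10" person
normal_top = weight_for_bmi(24.9, height)  # ~174 lb, top of the normal range
bmi_35 = weight_for_bmi(35.0, height)      # ~244 lb
print(f"BMI 35 at 5 ft 10 in is about {bmi_35 - normal_top:.0f} lb of extra weight")
# -> about 70 lb, and even more for taller people
```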

What about the lower risk of death in the overweight and class 1 obesity groups compared with the normal BMI group? Well, this can be explained by a variety of factors:

  • The wrong control group. Many researchers argue that the control group should really be a BMI of 22-24.9, not the wider range of 18.5-24.9 used in this study. The reason: many people at the thinner end of the scale are thin because of illness, and this obviously skews mortality statistics.
  • BMI is an imperfect tool with which to predict mortality when the result isn’t one extreme (<18.5) or the other (>34.9). This is not a new finding. BMI looks only at height and weight, not at the proportion of weight that is muscle mass vs. fatty tissue. Many people with a normal BMI have very little muscle mass, are carrying around excess fat, and are less healthy than their BMI suggests. There are better metrics for assessing mortality risk in people who have a BMI in the 18.5-34.9 range, such as waist circumference, resting heart rate, fasting glucose, leptin levels, and even DXA scans (just to name a few). The problem is that not all of these measurement tools are practical on a large scale.
  • A small amount of fat may provide an extra energy reserve for someone who becomes chronically ill, thus skewing the survival stats. For example, consider the dramatic weight loss associated with chemo: if you can’t eat due to extreme nausea and you have a little extra fat, then you burn fat, but if you have no fat and can’t eat, then you start breaking down muscle. This phenomenon has popped up in a few studies and definitely requires more research, because obesity is definitely associated with worse outcomes in many cancers.
  • Not all fat is created equal. Belly fat, the metabolically active muffin top, is what contributes to diabetes and other inflammatory conditions. Having a few extra pounds around the middle is far worse than having a few extra pounds on the hips. Again, not new information. BMI doesn’t distinguish between belly fat and thigh fat.

What is very important is that we don’t take erroneous messages from this study (hello, health reporters for major news outlets looking for attention-grabbing headlines). This study says nothing more than that we need better tools than BMI to assess mortality risk for people who have a body mass index between 18.5 and 34.9, and that BMI doesn’t predict “ideal weight”; it only tells us that extremes are bad. This study also confirms that the 15% of Americans with a BMI of 35 or greater are at increased risk of dying prematurely, a point sadly missed by many.

Body mass index simply doesn’t convey enough information to assess mortality risk for 85% of the population, but that fact (which isn’t new) shouldn’t stop each and every one of us from striving every day to be the healthiest that we can be.

Dr. Jennifer Gunter is an OB/GYN and a pain medicine physician who has authored the book The Preemie Primer, a guide for parents of premature babies. In addition to her academic publications, her writing has appeared in USA Today, the A Cup of Comfort series, KevinMD.com, EmpowHer.com, Exceptional Parent, Parents Press, Sacramento Parent, and the Marin Independent Journal.

Shmeat and Potatoes: The dinner of the future?

By Jeanne Garbarino, Biology Editor


(Source)

“Meatloaf, beatloaf, double s[h]meatloaf…”  Was little Randy on to something?
Food engineering has been on an incredibly strange journey, but there is none stranger (at least to me) than the concept of in vitro meat.  Colloquially referred to as “shmeat,” a term born out of mashing up the phrase “sheets of meat,” in vitro meat may be available in our grocer’s refrigerator section in just a few years.  But how exactly is shmeat produced and how does it compare to, you know, that which is derived from actual animals?  Here, I hope to shed some light on this petri dish to kitchen dish phenomenon.

The shmeaty deets

When it comes to producing shmeat, scientists are taking advantage of the extensive cell culture technologies that have been developed over the course of the 20th century (for a brief history of these developments, check this out).  Because of what we have learned, we can easily determine the conditions under which cells grow best, and swiftly turn a few cells into a few million cells.  However, things can get a little tricky when growing complex, three-dimensional tissues like steak or boneless chicken breast.

(Source)

For instance, let’s consider a living, breathing cow.  Most people seem to enjoy fancy cuts like beef tenderloin, which, before the butcher gets to it, is located near the back of the cow.  In order for that meat to be nice and juicy, it needs to have enough nutrients and oxygen to grow.  In addition, muscles (in this case, the tenderloin) need stimulation, and in the cow (and us too!) that is achieved by flexing and relaxing.

If shmeat is to be successfully engineered, scientists need to replicate all of the complexities that occur during the normal life of an actual animal.  While the technology for making shmeat is still being optimized, the components involved in this meat-making scheme successfully address many of the major issues with growing whole tissues in a laboratory. 

The first step in culturing meat is to get some muscle cells from an animal.  Because cells divide as they grow, a single animal could, in theory, provide enough cells to make meat for many, many people – and for a long period of time.  However, the major hurdle is creating a three-dimensional tissue, you know, something that would actually resemble a steak. 

Normally, cells will grow in a single layer on a petri dish, with a thickness that can only be measured by using a microscope.  Obviously that serving size would not be very satisfying.  In order to create that delicious three-dimensional look, feel, and taste, and be substantial enough to count as a meal, scientists have developed a way to grow the muscle cells on a scaffold made of natural and edible material.  As sheets of cells grow on these scaffolds, they are laid on top of each other to bulk up the shmeat (hence “sheets of meat”).  But, in order for the cells on the inside of this 3D mass to grow as well as the cells on the outside, there has to be an efficient way to deliver nutrients and oxygen to all cells.

Back to the tenderloin–when it is still in the cow, the cells that make up this piece of meat are in close contact with a series of veins, arteries, and capillaries.  Termed vasculature, this system allows the cells to obtain nutrients and oxygen, while simultaneously allowing them to dump any waste into the blood stream.  There are some suggestions that the shmeat can be vascularized (grown such that a network of blood vessels is formed); however, the nutrient delivery system most widely used at this point is something called a bioreactor.

A Bioreactor (Source)

This contraption is designed to support biologically active materials, and how it works is actually quite cool.  The cells are placed in the cylindrical bioreactor, which spins at a rate that balances multiple physical forces, keeping the entire cell mass fully submerged in liquid growth medium at all times.  This growth medium is constantly refreshed, ensuring that the cells are always supplied with a maximum level of growth factors.  In essence, the shmeat is kept in a perpetual free-fall state while it grows.

But there is one last piece to the meat-growing puzzle, and that is regular exercise.  If we look at meat on a purely biological level, we would see that it is just a series of cells arranged to form muscle tissue.  Without regular stimulation, muscles will waste away (atrophy).  Clearly, wasting shmeat would not be very efficient (or tasty).  So, shmeat engineers have reduced the biological process involved in muscle stimulation to its most basic components–mechanical contraction and electrical stimulation.  Though mechanical contraction (the controlled stretching and relaxing of the growing muscle fibers) has been shown to be effective, it is not exactly feasible on a large scale.  Electrical stimulation–the process of administering regular electrical pulses to the cells–is actually more effective than mechanical contraction and can be widely performed.  Therefore, it seems to be a more viable option for shmeat production.

Why in the world would we grow meat in a petri dish?

Grill it, braise it, broil it, roast it – as long as it tastes good, most people don’t usually question the origins of their meat.  Doing so could easily make one think twice about what they are eating.  Traditionally speaking, every slab of meat begins with a live animal – cow, pig, lamb, poultry (yes, despite what my grandmother says, this vegetarian does consider chicken to be meat) – with each animal only being able to provide a finite number of servings.  While shmeat does ultimately begin with a live animal, only a few muscle, fat, and other cells are required.

Given the theoretical amount that can be produced with just a few cells, the efficiency of traditional meat-generating farms and slaughterhouses is becoming increasingly scrutinized.  There are obvious costs–economic, agricultural, environmental–that are associated with livestock, and it has been proposed (article behind dumb pay wall, grrrr….) that shmeat engineering would substantially cut these costs.  For instance, it has been projected that shmeat production could use up to 45% less energy, compared to traditional farming methods.  Furthermore, relative to the current meat production process, culturing shmeat would use 99% less land, 82-96% less water, and would significantly reduce the amount of greenhouse gases produced.

The impact of shmeat compared to traditional agricultural processes.
(Environ. Sci. Technol., 2011, 45 (14), pp 6117–6123)

But the potential benefits of making the shift toward shmeat (as opposed to meat) don’t stop with its positive environmental impact.  From a nutritional standpoint, it is possible to produce shmeat in a way that would significantly reduce the amount of saturated fat it contains.  Additionally, there are technologies that would allow shmeat to be enriched with heart-healthy omega-3 fats, as well as other types of polyunsaturated fats.  In essence, shmeat could possibly help combat our growing obesity epidemic, as well as associated illnesses such as diabetes and heart disease.  That’s *if* it can be produced in a way that is both affordable and widely available (more on that in a bit).

In terms of health, switching to shmeat would improve more than our waistlines.  Because shmeat would be produced in a sterile environment, the incidence of E. coli and other bacterial and/or viral contamination would be next to nothing relative to current meat production methods.  On a more superficial level, shmeat technology would allow for the introduction of some very exotic meats into the mainstream.  Because this technology does not require an animal to be slaughtered (another good reason in support of shmeat production) and it is not limited to the more common sources of meat, it would be entirely possible to make things like panda sausage and crocodile burgers.  But, of course, getting people to actually eat meat grown in a test-tube is another issue…

The limitations of shmeat

Now that I’ve just spent a few paragraphs singing shmeat’s praises, it is probably best that I fill you in on some of the major roadblocks associated with shmeat production.  According to scientists, there are two main concerns: the first is that shmeat production will not be subject to the normal regulatory (homeostatic) mechanisms that naturally occur in animals (scientists are having trouble figuring out how to replicate these processes); the second is that shmeat engineering technology has not evolved enough to work on an industrial scale.  Because of these issues and others, the cost of culturing shmeat in the laboratory is very high.  But, there has always got to be a starting point.  As the technologies advance, the cost-production ratios will decrease and, eventually, shmeat will find its way to the dining table–our dining table.

Interestingly, the folks at PETA are all for shmeat and offered a one million dollar prize to the first group who could come up with the technology to make shmeat commercially available by June 2012.  Obviously, that did not happen, and the contest has been extended to January 2013 (this offer has been on the table since 2008).  But, the first taste test for shmeat hamburgers is going down in October of this year.

At the moment, the largest piece of shmeat to be created is about the size of a contact lens, and my guess is that, barring unforeseen technological breakthroughs, this reward will go unclaimed for a long, long time.  But, many a miracle has been known to happen in about nine months’ time…

A few final thoughts on shmeat

With the world population expected to hit 9 billion by 2050, bringing with it a major increase in the amount of food we need to produce, perhaps shmeat technology will become one of the critical innovations required for our collective survival on this planet.  But, there is just one thing: the ick factor.  It is a little hard for me to weigh in on this issue because almost all meat seems gross to me (unless it is a pulled pork sandwich, lovingly made by my long-time pal and professional chef–Julie Hall).  While most of my peers have less of an aversion to meat, I can’t imagine that they would eagerly line up for a whopping serving of lab-grown shmeat.

But, say scientists finally figure it out and shmeat production is scaled up for mass consumption–how will the agricultural sector react?  As of right now, the agricultural industry in the USA is worth over $70 billion, with yearly beef consumption tipping over the 26 billion pound mark (of which 8.7% is exported).  Shmeat has definitely gotten the attention of cattle farmers (and other meat farmers/production companies) and, given the size of this industry, I wonder how much muscle will be used to block shmeat from becoming a household phenomenon.

Overall, I think that shmeat is a revolutionary idea, as it could have a significant impact on humanity.  However, there are many complex questions that need to be both asked and answered.  As excited as I am at the thought of not having to kill an animal to eat a steak, I still remain skeptical (though this sentiment may not have been fully present for the majority of this post).  Will shmeat be produced in such a way that it will be indistinguishable from traditional meat?  Additionally, will shmeat live up to all of these expectations?  I am going to try and keep a positive outlook with this one.  Perhaps the next time I actually step foot in a kitchen to prepare a meal, I’ll follow Randy’s lead by making a shmeatloaf, served alongside a heaping side of mashed potatoes.  Now that’s some pretty cool kitchen science.

And now, an oldie but a goodie (let it be known that I am in love with Stephen Colbert):

The Colbert Report, “World of Nahlej – Shmeat” (www.colbertnation.com)

For more information:
The Brian Lehrer Show, Shmeat: It’s what’s for dinner