Towards better drug development, fewer side effects?

You may have had the experience: A medication you and a friend both take causes terrible side effects in you, but your friend experiences none. (The running joke in our house is, if a drug has a side effect, we’ve had it.) How does that happen, and why would a drug that’s meant to, say, stabilize insulin levels produce terrible gastrointestinal side effects, too? A combination of techy-tech scientific approaches might help answer those questions for you — and lead to some solutions.

It’s no secret I love lab technology. I’m a technophile. A geek. I call my web site “Biotechnically Speaking.” So when I saw this paper in the September issue of Nature Biotechnology, well, I just had to write about it.

The paper is entitled, “Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators.” If you read that and your eyes glazed over, don’t worry – the article is way more interesting than its title.

Those trees on the right are called SPADE trees. They map cellular responses to different stimuli in a collection of human blood cells. Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
Here’s the basic idea: The current methods drug developers use to screen potential drug compounds – typically a blend of high-throughput imaging and biochemical assays – aren’t perfect. If they were, drugs wouldn’t fail late in development. Stanford immunologist Garry Nolan and his team, led by postdoc Bernd Bodenmiller (who now runs his own lab in Zurich), figured part of that problem stems from the fact that most early drug testing is done on immortalized cell lines, rather than “normal” human cells. Furthermore, the tests that are run on those cells aren’t as comprehensive as they could be, meaning potential collateral effects of the compounds might be missed. Nolan wanted to show that flow cytometry, a cell-analysis technique frequently used in immunology labs, can help reduce that failure rate by measuring drug impacts more holistically.


Nolan is a flow cytometry master. As he told me in 2010, he’s been using the technique for more than three decades, and even used a machine now housed in the Smithsonian.


In flow cytometry, researchers treat cells with reagents called antibodies, which are immune system proteins that recognize and bind to specific proteins on cell surfaces. Each type of cell has a unique collection of these proteins, and by studying those collections, it is possible to differentiate and count the different populations.


Suppose researchers wanted to know how many T cells of a specific type were present in a patient’s blood. They might treat those cells with antibodies that recognize a protein known as CD3 to pick those out. By adding additional antibodies, they can then select different T-cell subpopulations, such as CD4-positive helper T cells and CD8-positive cytotoxic T cells, both of which help you mount immune responses.


Cells of the immune system
Source: http://stemcells.nih.gov/info/scireport/chapter6.asp
In a basic flow cytometry experiment, each antibody is labeled with a unique fluorescent dye – the antibody targeting CD3 might be red, say, and the CD4 antibody, green. The cells stream past a laser, one by one. The laser (or lasers – there can be as many as seven) excites the dye molecules decorating the cell surface, causing them to fluoresce. Detectors capture that light and give a count of the total number of cells measured and the types of cells present. The result is a kind of catalog of the cell population. For immune cells, for example, that could be the number of T cells, B cells (which, among other things, help you “remember” previous invaders), and macrophages (the big cells that chomp up invaders and infected cells). By comparing the cellular catalogs that result under different conditions, researchers gain insight into development, disease, and the impact of drugs, among other things.


But here’s the problem: Fluorescent dyes aren’t like lasers, which emit light of essentially one color. Dyes absorb and emit light over a range of colors, called a spectrum. And those spectra can overlap, such that when a researcher thinks she’s counting CD4 T cells, she may actually be counting some macrophages. That overlap leads to all sorts of experimental optimization issues. An exceptionally talented flow cytometrist can assemble panels of perhaps 12 or so dyes, but it might take months to get everything just right.


That’s where mass cytometry comes in. Commercialized by DVS Sciences, mass cytometry is essentially the love-child of flow cytometry and mass spectrometry, combining the one-cell-at-a-time analysis of the former with the atomic precision of the latter. Mass spectrometry identifies molecules based on the ratio of their mass to their charge. In DVS’ CyTOF mass cytometer, cells in a flowing stream are analyzed not by shining a laser on them, but by nuking them in superhot plasma. The nuking reduces each cell to its atomic components, which the CyTOF then measures.

Specifically, the CyTOF looks for heavy atoms called lanthanides, elements found in the first of the two bottom rows of the periodic table, like gadolinium, neodymium, and europium. These elements never naturally occur in biological systems and so make useful cellular labels. More to the point, the mass spectrometer is specific enough that these signals basically don’t overlap. The instrument will never confuse gadolinium for neodymium, for instance. Researchers simply tag their antibodies with lanthanides rather than fluorophores, and voila! Instant antibody panel, no (or little) optimization required.

Periodic Table of Cupcakes, with lanthanides in hot pink frosting.
Source: http://www.buzzfeed.com/jpmoore/the-periodic-table-of-cupcakes
Now back to the paper. Nolan (who sits on DVS Sciences’ Scientific Advisory Board) and Bodenmiller wanted to see if mass cytometry could provide the sort of high-density, high-throughput cellular profiling that is required for drug development. The team took blood cells from eight donors, treated them with more than two dozen different drugs over a range of concentrations, added a dozen stimuli to which blood cells can be exposed in the body, and essentially asked, for each of the pathways we want to study, in each kind of cell in these donors’ blood, what did the drug do?


To figure that out, they used a panel of 31 lanthanides – 10 to sort out the cell types they were looking at in each sample, 14 to monitor cellular signaling pathways, and 7 to identify each sample.


I love that last part, about identifying the samples. The numbers in this experiment are kind of staggering: 12 stimuli x 8 doses x 14 cell types x 14 intracellular markers per drug, times 27 drugs, is more than half-a-million pieces of data. To make life easier on themselves, the researchers pooled samples 96 at a time in individual tubes, adding a “barcode” to uniquely identify each one. That barcode (called a “mass-tag cellular barcode,” or MCB) is essentially a 7-bit binary number made of lanthanides rather than ones and zeroes: one sample would have none of the 7 reserved markers (0000000); one sample would have one marker (0000001); another would have another (0000010); and so on. Seven lanthanides produce 128 possible combinations, so it’s no sweat to pool 96. They simply mix those samples in a single tube and let the computer sort everything out later.
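(If you like seeing the bookkeeping spelled out, here is a minimal sketch in Python. The condition and readout counts come straight from the paragraph above; the 96-sample assignment is just for illustration, not taken from the paper itself.)

from itertools import product

# How big is the dataset? Multiply out the conditions and readouts listed above.
stimuli, doses, cell_types, markers, drugs = 12, 8, 14, 14, 27
print(stimuli * doses * cell_types * markers * drugs)  # 508032 -- more than half a million

# Mass-tag cellular barcoding: 7 barcode lanthanides, each present (1) or absent (0)
# in a given sample, gives 2**7 = 128 unique patterns -- plenty for a 96-sample pool.
barcodes = ["".join(bits) for bits in product("01", repeat=7)]
print(len(barcodes))       # 128
plate = barcodes[:96]      # one pattern per pooled sample
print(plate[0], plate[1])  # 0000000 0000001

That really is the whole trick: presence or absence of each of seven metals, read like the bits of a binary number.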


This graphic summarizes a boatload of data on cell signaling pathways impacted by different drugs.
Credit: (c) 2012 Nature America [Nat Biotechnol, 30:858–67, 2012]
When all was said and done, the team was able to draw some conclusions about drug specificity, person-to-person variation, cell signaling, and more. Basically, and not surprisingly, some of the drugs they looked at are less specific than originally thought – that is, they affect their intended targets, but other pathways as well. That goes a long way towards explaining side effects. But more to the point, they showed that their approach can be used to drive drug-screening experiments.


And I get to write about it. 

For Dad: A guide on strokes, including a glossary of terms

A scanning electron micrograph of a blood clot.  Image credit: Steve Gschmeissner/Science Photo Library (http://www.sciencephoto.com/media/203271/enlarge#) 


On Monday, January 1st, I overheard my dad telling my mom that his left arm was numb and that he had no strength in his left hand.  I immediately ran to the medicine cabinet, grabbed two aspirin, practically shoved them down my dad’s throat, and told him to get his coat.  He was going to the ER.

As it turns out, my dad was having a stroke, which is basically the cessation of blood flow to an area in the brain.  Luckily, my dad only suffered a very mild stroke, and after several days of monitoring and a battery of tests, he was released from the hospital. 

While we are all relieved that he dodged what could have been a fatal bullet, I came to realize that none of us had more than a superficial understanding of what was actually happening.  So, to help demystify the process for my dad (and anyone else in this situation), I’ve decided to write a mini-guide on strokes.  Below you will find some handy information about strokes, including what they are, as well as a glossary of relevant terms.

Why we need blood flow in the brain

Before I get into what happens to the brain when a stroke occurs, we first need to understand why unrestricted blood flow in the blood vessels of the brain matters.  The brain is a tissue, and like all tissues in our body, it needs constant access to nutrients and oxygen.  Furthermore, tissues produce waste, and this waste needs to be removed.

The human cardiovascular system. Image Credit: Wikipedia.
Evolution’s solution to this problem is a vast network of blood vessels within our tissues.  For instance, take a good look at your very own eyeballs.  Especially when we are tired, we can see tiny blood vessels called capillaries, which help to deliver key nutrients and oxygen, keeping our organs of sight healthy and happy.  Now consider that this type of blood vessel network exists in all tissues in our bodies (because it does).  Depending on the needs of the tissue, these vessels vary in size and number.  Sometimes the blood vessels are large, like the aorta, and sometimes they are super tiny, like the capillaries in our eyes.  However, all serve the same function: to make sure that cells can breathe, eat, and get rid of waste.

When blood is prevented from traveling to a specific area within a tissue, the cells in that area will not get enough fuel and oxygen and will begin to die.  For instance, the restriction of blood flow to the heart leads to the death of heart tissue, causing a heart attack.  Similarly, the interruption of normal blood flow within the brain causes the affected cells in the brain to essentially starve, suffocate, and die, resulting in a stroke.  The medical term for a lack of oxygen delivery to tissues due to a restriction in blood flow is ischemia.  In general, the heart, brain, and the kidneys are the most sensitive to ischemic events, which, when occurring in these organs, can be fatal.      

So, what exactly is a stroke?

Some strokes can be categorized as being ischemic.  As mentioned above, an ischemic stroke occurs when blood flow (and the associated oxygen supply) is restricted in an area within the brain, leading to tissue death.  A major cause of ischemic strokes is a progressive disease called atherosclerosis, which can be translated to mean “the hardening of the arteries.” 

Severe atherosclerosis of the aorta.
Image Credit: Wikipedia.
Affecting the entire cardiovascular system, atherosclerosis is the result of cholesterol build-up inside of our blood vessels, causing their openings to become narrower.  These cholesterol plaques can eventually burst, leading to the formation of a blood clot.  Ischemic strokes occur as a result of a blood clot, medically known as a thrombus, that blocks the flow of blood to the brain, a phenomenon often related to complications from atherosclerosis.  A ruptured cholesterol plaque and resulting blood clot can occur in the brain, or it can occur elsewhere in the body, such as in the carotid arteries, and then travel to the brain.  Either way, the blood clot will block blood flow and oxygen delivery to sensitive brain tissue and cause a stroke.           

Strokes that result from the bursting of a blood vessel in the brain can be categorized as being hemorrhagic.  In this situation, there may be a pre-existing condition rendering the blood vessels in the brain defective, causing them to become weak and more susceptible to bursting.  More often than not, a hemorrhagic stroke is the result of high blood pressure, which puts an awful lot of stress on the blood vessels.  Hemorrhagic strokes are less common than ischemic strokes, but still just as serious. 

How do you know if you’ve had a stroke?

The symptoms of a stroke can vary depending on which part of the brain is affected and can develop quite suddenly.  It is common to experience a moderate to severe headache, especially if you are hemorrhaging (bleeding) in the brain.  Other symptoms can include dizziness, a change in senses (hearing, seeing, tasting), muscle tingling and/or weakness, trouble communicating, and/or memory loss.  If you are experiencing any of these warning signs, it is important to get to the hospital right away.  This is especially important if the stroke is being caused by a blood clot, since clot-busting medications are only effective within the first few hours of clot formation.

Once in the hospital, the caregiver will likely give anyone suspected of having a stroke a CT scan.  From this test, doctors will be able to determine if you had a stroke, what type of stroke you had (ischemic versus hemorrhagic), or if there is some other issue.  However, as was the case with my dad, a CT scan may not show evidence of a stroke.  This issue can arise as a result of timing (the scan was performed before visible brain injury set in) or the size of the affected area (too small to see).  When not in an emergency situation, doctors may also or instead choose to order an MRI test to look for evidence of a stroke.

If a stroke has been confirmed, the next steps will be to try and figure out the underlying cause.  For ischemic strokes, it is important to find out if there is a blood clot and where it originated.  Because my dad had an ischemic stroke, he had to undergo a series of tests that searched for a blood clot in his carotid arteries through ultrasound, as well as in the heart, using both an electrocardiogram (EKG) and an echocardiogram (ultrasound of the heart).  The patient might also be asked to wear a Holter monitor, which is a device worn for at least 24 hours that can detect potential heart abnormalities that may not be obvious from short-term observations, like those obtained via an EKG.  If a stroke is due to a hemorrhagic event, an angiogram would be performed to try to pinpoint the compromised blood vessel.

A stroke you did have.  Now what?

Once a stroke has been confirmed and categorized, the patient will most likely be transferred to the stroke unit of the hospital for both treatment and further observation.  If a clot has been detected, a patient will receive clot-busting medications (assuming this detection occurs within several hours of clot formation).  Alternatively, a clot can be mechanically removed with surgery, a procedure known as a thrombectomy.  Patients might also be given blood-thinning medications to either ensure that clots do not increase in size or to prevent new clots from forming.  As for secondary prevention, meaning preventing another stroke from happening, patients might be given blood pressure- and cholesterol-lowering medications.

If a disability arises due to stroke, a patient might need to undergo rehabilitation.  The type and duration of stroke rehabilitation is dependent on the area of brain that was affected, as well as the severity of the injury.  

Major risk factors and predictors of stroke

There are many situations that could predispose one to having a stroke, and many of these conditions are treatable.  The absolute greatest predictor of a stroke is blood pressure.  High blood pressure, also known as hypertension, will significantly raise your risk of having a stroke.  Other modifiable stroke risk factors include blood cholesterol levels, smoking, type 2 diabetes, diet, alcohol/drug use, and a sedentary lifestyle.  However, there are also risk factors that you cannot change, including family history of stroke, age, race, and gender.  But that shouldn’t stop one from practicing a healthy lifestyle!

In conclusion, strokes are no joke.  I am glad that my dad is still here (yes, dad, if you are reading this, we are in fact friends) and that he escaped with essentially no lasting consequences.  Let’s just not do this again, ok?

Stroke Glossary

Anti-coagulants: These are medications that help to reduce the incidence of blood clotting.  The repertoire includes warfarin (sold under the brand name Coumadin), along with antiplatelet drugs such as aspirin and Plavix.  Also called blood thinners.
Atherosclerosis: Literally translated as “hardening of the arteries,” this condition is marked by the build-up of cholesterol inside of blood vessels.  Atherosclerosis can lead to many complications, including heart disease and stroke.

Atherosclerotic Plaque: The build-up of fatty materials, cholesterol, various cell types, and calcium.

Cardiovascular System: The network of blood vessels and heart that works to distribute blood throughout the body. 

Carotid Arteries: Arteries that carry blood away from the heart toward the head, neck, and brain.

CT Scan: Cross-sectional pictures of the brain taken using X-rays.

Echocardiogram: An ultrasound of the heart.  In stroke victims, echocardiography is used to detect the presence of a blood clot in the heart.

Electrocardiogram (EKG or ECG): The measurement of the electrical activity of the heart.  It is performed by attaching electrodes to a patient at numerous locations on the body, which function to measure electrical output of the heart.

Embolic Stroke: A type of ischemic stroke, an embolic stroke occurs when a blood clot forms (usually in the heart) and then travels to the brain, blocking blood flow and oxygen delivery to brain tissue.

Hemorrhagic Stroke: A type of stroke that results from the bursting of a blood vessel in the brain.

Hypertension: High blood pressure, defined as a blood pressure of 140/90 mmHg or above.

Ischemic Stroke: A stroke caused by the restriction of blood flow to an area within the brain.

Magnetic Resonance Imaging (MRI): An imaging technique employing a magnetic field that can contrast different soft tissues in the body.

Thrombolytic Medications: Medications that are approved to dissolve blood clots.  Also called “clot-busting” medications.

Thrombus: A blood clot.

Biology Xplainer: Evolution and how it happens

Evolution: a population changes over time
First of all, in the context of science, you should never speak of evolution as a “theory.” There is no theory about whether or not evolution happens. It is a fact.

Scientists have, however, developed tested theories about how evolution happens. Although several proposed and tested processes or mechanisms exist, the most prominent and most studied, talked about, and debated, is Charles Darwin’s idea that the choices of nature guide these changes. The fame and importance of his idea, natural selection, has eclipsed the very real existence of other ways that populations can change over time.

Evolution in the biological sense does not occur in individuals, and the kind of evolution we’re talking about here isn’t about life’s origins. Evolution must happen at least at the population level. In other words, it takes place in a group of existing organisms, members of the same species, often in a defined geographical area.

We never speak of individuals evolving in the biological sense. The population, a group of individuals of the same species, is the smallest unit of life that evolves.

To get to the bottom of what happens when a population changes over time, we must examine what’s happening to the gene combinations of the individuals in that population. The most precise way to talk about evolution in the biological sense is to define it as “a change in the allele frequency of a population over time.” A gene, which contains the code for a protein, can occur in different forms, or alleles. These different versions can mean that the trait associated with that protein can differ among individuals. Thanks to mutations, a gene for a trait can exist in a population in these different forms. It’s like having slightly different recipes for making the same cake, each producing a different version of the cake, except in this case, the “cake” is a protein.
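(To make “allele frequency” concrete, here is a tiny worked example as a Python snippet; the population size and allele counts are made up for illustration.)

# A hypothetical population of 50 diploid individuals carries 100 copies of an eye-color gene.
# Suppose 30 copies are the "brown" allele and 70 copies are the "blue" allele.
brown_copies, blue_copies = 30, 70
total_copies = brown_copies + blue_copies

print(brown_copies / total_copies)  # 0.3 -- frequency of the brown allele
print(blue_copies / total_copies)   # 0.7 -- frequency of the blue allele

# If a later census of the same population finds, say, 0.4 and 0.6 instead,
# that change in allele frequency over time is evolution in the biological sense.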
Natural selection: One way evolution happens

Charles Darwin, a smart, thoughtful, observant man. Via Wikimedia.
Charles Darwin, who didn’t know anything about alleles or even genes (so now you know more than he did on that score), understood from his work and observations that nature makes certain choices, and that often, what nature chooses in specific individuals turns up again in the individuals’ offspring. He realized that these characteristics that nature was choosing must pass to some offspring. This notion of heredity–that a feature encoded in the genes can be transmitted to your children–is inherent now in the theory of natural selection and a natural one for most people to accept. In science, an observable or measurable feature or characteristic is called a phenotype, and the genes that are the code for it are called its genotype. The color of my eyes (brown) is a phenotype, and the alleles of the eye color genes I have are the genotype.

What is nature selecting any individual in a population to do? In the theory of natural selection, nature chooses individuals that fit best into the current environment to pass along their “good-fit” genes, either directly through reproduction or indirectly by supporting the reproducer. The organisms nature chooses survive and pass along those good-fit genes, and so have greater fitness.

Fitness is an evolutionary concept related to an organism’s reproductive success, either directly (as a parent) or indirectly (say, as an aunt or cousin). It is measured technically based on the proportion of an individual’s alleles that are represented in the next generation. When we talk about “fitness” and “the fittest,” remember that fittest does not mean strong. It relates more to a literal fit, like a square peg in a square hole, or a red dot against a red background. It doesn’t matter if the peg or dot is strong, just whether or not it fits its environment.

One final consideration before we move onto a synthesis of these ideas about differences, heredity, and reproduction: What would happen if the population were uniformly the same genetically for a trait? Well, when the environment changed, nature would have no choice to make. Without a choice, natural selection cannot happen–there is nothing to select. And the choice has to exist already; it does not typically happen in response to a need that the environment dictates. Usually, the ultimate origin for genetic variation–which underlies this choice–is mutation, or a change in a DNA coding sequence, the instructions for building a protein.

Don’t make the mistake of saying that an organism adapts by mutating in response to the environment. The mutations (the variation) must already be present for nature to make a choice based on the existing environment.

The Modern Synthesis

When Darwin presented his ideas about nature’s choices in an environmental context, he did so in a book with a very long title that begins, On the Origin of Species by Means of Natural Selection. Darwin knew his audience and laid out his argument clearly and well, with one stumbling block: How did all that heredity stuff actually work?

We now know–thanks to a meticulous scientist named Gregor Mendel (who also was a monk), our understanding of reproductive cell division, and modern genetics–exactly how it all works. Our traits–whether winners or losers in the fitness Olympics–have genes that determine them. These genes exist in us in pairs, and these pairs separate during division of our reproductive cells so that our offspring receive one member or the other of the pair. When this gene meets its coding partner from the other parent’s cell at fertilization, a new gene pair arises. This pairing may produce a similar outcome to one of the parents or be a novel combination that yields some new version of a trait. But this separating and pairing is how nature keeps things mixed up, setting up choices for selection.

Ernst Mayr, via PLoS.
With a growing understanding in the twentieth century of genetics and its role in evolution by means of natural selection, a great evolutionary biologist named Ernst Mayr (1904–2005) guided a meshing of genetics and evolution (along with other brilliant scientists including Theodosius Dobzhansky, George Simpson, and R.A. Fisher) into what is called The Modern Synthesis. This work encapsulates (dare I say, “synthesizes?”) concisely and beautifully the tenets of natural selection in the context of basic genetic inheritance. As part of his work, Mayr distilled Darwin’s ideas into a series of facts and inferences.

Facts and Inferences

Mayr’s distillation consists of five facts and three inferences, or conclusions, to draw from those facts.
  1. The first fact is that populations have the potential to increase exponentially. A quick look at any graph of human population growth illustrates that we, as a species, appear to be realizing that potential. For a less successful example, consider the sea turtle. You may have seen the videos of the little turtle hatchlings valiantly flippering their way across the sand to the sea, cheered on by the conservation-minded humans who tended their nests. What the cameras usually don’t show is that the vast majority of these turtle offspring will not live to reproduce. The potential for exponential growth is there, based on the number of offspring produced, but…it doesn’t happen.
  2. The second fact is that not all offspring reproduce, and many populations are stable in size. See “sea turtles,” above.
  3. The third fact is that resources are limited. And that leads us to our first conclusion, or inference: there is a struggle among organisms for nutrition, water, habitat, mates, parental attention…the various necessities of survival, depending on the species. The large number of offspring, most of which ultimately don’t survive to reproduce, must compete, or struggle, for the limited resources.
  4. Fact four is that individuals differ from one another. Look around. Even bacteria of the same strain have their differences, with some more able than others to withstand an antibiotic onslaught. Look at a crowd of people. They’re all different in hundreds of ways.
  5. Fact five is that much about us that is different lies in our genes–it is inheritable. Heredity undeniably exists and underlies a lot of our variation.
So we have five facts. Now for the three inferences:

  1. First, there is that struggle for survival, thanks to so many offspring and limited resources. See “sea turtle,” again.
  2. Second, different traits will be passed on differentially. Put another way: Winner traits are more likely to be passed on.
  3. And that takes us to our final conclusion: if enough of these “winner” traits are passed to enough individuals in a population, they will accumulate in that population and change its makeup. In other words, the population will change over time. It will be adapted to its environment. It will evolve.
Other mechanisms of evolution

A pigeon depicted in Charles Darwin’s Variation of Animals and Plants Under Domestication, 1868. U.S. public domain image, via Wikimedia.
When Darwin presented his idea of natural selection, he knew he had an audience to win over. He pointed out that people select features of organisms all the time and breed them to have those features. Darwin himself was fond of breeding pigeons with a great deal of pigeony variety. He noted that unless the pigeons already possessed traits for us to choose, we would not have that choice to make. But we do have choices. We make super-woolly sheep, dachshunds, and heirloom tomatoes simply by selecting from the variation nature provides and breeding those organisms to make more with those traits. We change the population over time.

Darwin called this process of human-directed evolution artificial selection. It made great sense for Darwin because it helped his reader get on board. If people could make these kinds of choices and wreak these kinds of changes, why not nature? In the process, Darwin also described this second way evolution can happen: human-directed evolution. We’re awash in it today, from our accidental development of antibiotic-resistant bacteria to wheat that resists devastating rust.

Genetic drift: fixed or lost

What about traits that have no effect either way, that are just there? One possible example in us might be attached earlobes. Good? Bad? Ugly? Well…they don’t appear to have much to do with whether or not we reproduce. They’re just there.

When a trait leaves nature so apparently indifferent, the alleles underlying it don’t experience selection. Instead, they drift in one direction or another, to extinction or 100 percent frequency. When an allele drifts to disappearance, we say that it is lost from the population. When it drifts to 100 percent presence, we say that it has become fixed. This process of evolution by genetic drift reduces variation in a population. Eventually, everyone in the population will carry the allele, or no one will.
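(If you would like to watch drift happen, here is a minimal simulation sketch in Python; the population size and starting frequency are arbitrary choices, not values from any real study. Run it a few times: the neutral allele wanders until it is either lost or fixed, and the outcome differs from run to run.)

import random

def drift(pop_size=100, freq=0.5):
    """Follow a neutral allele until it is lost (frequency 0.0) or fixed (frequency 1.0)."""
    generations = 0
    while 0.0 < freq < 1.0:
        # Each of the 2N allele copies in the next generation is drawn at random
        # from the current generation -- no selection, just sampling chance.
        copies = sum(1 for _ in range(2 * pop_size) if random.random() < freq)
        freq = copies / (2 * pop_size)
        generations += 1
    return ("fixed" if freq == 1.0 else "lost"), generations

print(drift())  # e.g., ('lost', 142) -- your result will vary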

Gene flow: genes in, genes out

Another way for a population to change over time is for it to experience a new infusion of genes or to lose a lot of them. This process of gene flow into or out of the population occurs because of migration in or out. Either of these events can change the allele frequency in a population, and that means that gene flow is another way that evolution can happen.

If gene flow happens between two different species, as can happen more often with plants, then not only has the population changed significantly, but the new hybrid that results could be a whole new species. How do you think we get those tangelos?

Horizontal gene transfer

One interesting mechanism of evolution is horizontal gene transfer. When we think of passing along genes, we usually envision a vertical transfer through generations, from parent to offspring. But what if you could just walk up to a person and hand over some of your genes to them, genes that they incorporate into their own genome in each of their cells?

Of course, we don’t really do that–at least, not much, not yet–but microbes do this kind of thing all the time. Viruses that hijack a cell’s genome to reproduce can accidentally leave behind a bit of gene and voila! It’s a gene change. Bacteria can reach out to other living bacteria and transfer genetic material to them, possibly altering the traits of the population.

Evolutionary events

Sometimes, events happen at a large scale that have huge and rapid effects on the overall makeup of a population. These big changes mark some of the turning points in the evolutionary history of many species.

Cheetahs underwent a bottleneck that has left them with little genetic variation. Photo credit: Malene Thyssen, via Wikimedia.
Bottlenecks: losing variation

The word bottleneck pretty much says it all. Something happens over time to reduce the population so much that only a relatively few individuals survive. A bottleneck of this sort reduces the variability of a population. These events can be natural–such as those resulting from natural disasters–or they can be human induced, such as species bottlenecks we’ve induced through overhunting or habitat reduction.

Founder effect: starting small

Sometimes, the genes flow out of a population. This flow occurs when individuals leave and migrate elsewhere. They take their genes with them (obviously), and the populations they found will initially carry only those genes. Whatever they had with them genetically when they founded the population can affect that population. If the founders happen to carry a gene that causes a deadly reaction to barbiturates, the population they establish will have a higher-than-usual frequency of people with that response, thanks to this founder effect.

Gene flow leads to two key points to make about evolution: First, a population carries only the genes it inherits and generally acquires new versions through mutation or gene flow. Second, that gene for lethal susceptibility to a drug would be meaningless in a natural selection context as long as the environment didn’t include exposure to that drug. The take-home message is this: What’s OK for one environment may or may not be fit for another environment. The nature of Nature is change, and Nature offers no guarantees.

Hardy-Weinberg: when evolution is absent

With all of these possible mechanisms for evolution under their belts, scientists needed a way to measure whether or not the frequency of specific alleles was changing over time in a given population or staying in equilibrium. Not an easy job. They found–“they” being G. H. Hardy and Wilhelm Weinberg–that the best way to measure this was to predict what the outcome would be if there were no change in allele frequencies. In other words, to predict that from generation to generation, allele frequencies would simply stay in equilibrium. If measurements over time yielded changing frequencies, then the implication would be that evolution has happened.
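(As a concrete illustration, not part of the original argument: for a gene with two alleles at frequencies p and q, the Hardy-Weinberg prediction is that genotypes show up in the proportions p², 2pq, and q², generation after generation, as long as nothing is pushing the frequencies around. A minimal sketch in Python, with made-up frequencies:)

# Two alleles, A and a, at made-up frequencies p and q (p + q = 1).
p, q = 0.6, 0.4

# Expected genotype proportions if the population is NOT evolving:
print(p * p)      # 0.36 -- AA
print(2 * p * q)  # 0.48 -- Aa
print(q * q)      # 0.16 -- aa

# If repeated censuses of real genotypes wander away from these proportions,
# at least one of the "not evolving" conditions listed below has been violated --
# in other words, evolution is happening.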

Defining “Not Evolving”

So what does it mean to not evolve? There are some basic scenarios that must exist for a population not to be experiencing a change in allele frequency, i.e., no evolution. If there is a change, then one of the items in the list below must be false:

·       Very large population (genetic drift can be a strong evolutionary mechanism in small populations)

·       No migrations (in other words, no gene flow)

·       No net mutations (no new variation introduced)

·       Random mating (directed mating is one way nature selects organisms)

·       No natural selection

In other words, a population that is not evolving is experiencing a complete absence of evolutionary processes. If any one of these conditions fails to hold in a given population, then evolution can occur, and allele frequencies from generation to generation won’t stay in equilibrium.

Convergent Evolution

Arguably the most famous of the egg-laying monotremes, the improbable-seeming platypus. License.
One of the best examples of the influences of environmental pressures is what happens in similar environments a world apart. Before the modern-day groupings of mammals arose, the continent of Australia separated from the rest of the world’s land masses, taking the proto-mammals that lived there with it. Over the ensuing millennia, these proto-mammals in Australia evolved into the native species we see today on that continent, all marsupials or monotremes.

Among mammals, there’s a division among those that lay eggs (monotremes), those that do most gestating in a pouch rather than a uterus (marsupials), and eutherians, which use a uterus for gestation (placental mammals).

Elsewhere in the world, most mammals developed from a common eutherian ancestor and, where marsupials still persisted, probably outcompeted them. In spite of this lengthy separation and different ancestry, however, for many of the examples of placental mammals, Australia has a similar marsupial match. There’s the marsupial rodent that is like the rat. The marsupial wolf that is like the placental wolf. There’s even a marsupial anteater to match the placental one.

How did that happen an ocean apart with no gene flow? The answer is natural selection. The environment that made an organism with anteater characteristics best fit in South America was similar to the environment that made those characteristics a good fit in Australia. Ditto the rats, ditto the wolf.

When similar environments result in unrelated organisms having similar characteristics, we call that process convergent evolution. It’s natural selection in relatively unrelated species in parallel. In both regions, nature uses the same set of environmental features to mold organisms into the best fit.

By Emily Willingham, DXS managing editor

Note: This explanation of evolution and how it happens is not intended to be comprehensive or detailed or to include all possible mechanisms of evolution. It is simply an overview. In addition, it does not address epigenetics, which will be the subject of a different explainer.

Biology Explainer: The big 4 building blocks of life–carbohydrates, fats, proteins, and nucleic acids

The short version
  • The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids.
  • Carbohydrates serve many purposes, from energy to structure to chemical communication, as monomers or polymers.
  • Lipids, which are hydrophobic, also have different purposes, including energy storage, structure, and signaling.
  • Proteins, made of amino acids in up to four structural levels, are involved in just about every process of life.                                                                                                      
  • The nucleic acids DNA and RNA consist of four nucleotide building blocks, and each has different purposes.
The longer version
Life is so diverse and unwieldy, it may surprise you to learn that we can break it down into four basic categories of molecules. Possibly even more implausible is the fact that two of these categories of large molecules themselves break down into a surprisingly small number of building blocks. The proteins that make up all of the living things on this planet and ensure their appropriate structure and smooth function consist of only 20 different kinds of building blocks. Nucleic acids, specifically DNA, are even more basic: only four different kinds of molecules provide the materials to build the countless different genetic codes that translate into all the different walking, swimming, crawling, oozing, and/or photosynthesizing organisms that populate the third rock from the Sun.

                                                  

Big Molecules with Small Building Blocks

The functional groups, assembled into building blocks on backbones of carbon atoms, can be bonded together to yield large molecules that we classify into four basic categories. These molecules, in many different permutations, are the basis for the diversity that we see among living things. They can consist of thousands of atoms, but only a handful of different kinds of atoms form them. It’s like building apartment buildings using a small selection of different materials: bricks, mortar, iron, glass, and wood. Arranged in different ways, these few materials can yield a huge variety of structures.

We encountered functional groups and the SPHONC in Chapter 3. These components form the four categories of molecules of life. These Big Four biological molecules are carbohydrates, lipids, proteins, and nucleic acids. They can have many roles, from giving an organism structure to being involved in one of the millions of processes of living. Let’s meet each category individually and discover the basic roles of each in the structure and function of life.
Carbohydrates

You have met carbohydrates before, whether you know it or not. We refer to them casually as “sugars,” molecules made of carbon, hydrogen, and oxygen. A sugar molecule has a carbon backbone, usually five or six carbons in the ones we’ll discuss here, but it can be as few as three. Sugar molecules can link together in pairs or in chains or branching “trees,” either for structure or energy storage.

When you look on a nutrition label, you’ll see reference to “sugars.” That term includes carbohydrates that provide energy, which we get from breaking the chemical bonds in a sugar called glucose. The “sugars” on a nutrition label also include those that give structure to a plant, which we call fiber. Both are important nutrients for people.

Sugars serve many purposes. They give crunch to the cell walls of a plant or the exoskeleton of a beetle and chemical energy to the marathon runner. When attached to other molecules, like proteins or fats, they aid in communication between cells. But before we get any further into their uses, let’s talk structure.

The sugars we encounter most in basic biology have their five or six carbons linked together in a ring. There’s no need to dive deep into organic chemistry, but there are a couple of essential things to know to interpret the standard representations of these molecules.

Check out the sugars depicted in the figure. The top-left molecule, glucose, has six carbons, which have been numbered. The sugar to its right is the same glucose, with all but one “C” removed. The other five carbons are still there but are inferred using the conventions of organic chemistry: Anywhere there is a corner, there’s a carbon unless otherwise indicated. It might be a good exercise for you to add in a “C” over each corner so that you gain a good understanding of this convention. You should end up adding in five carbon symbols; the sixth is already given because that is conventionally included when it occurs outside of the ring.

On the left is a glucose with all of its carbons indicated. They’re also numbered, which is important to understand now for information that comes later. On the right is the same molecule, glucose, without the carbons indicated (except for the sixth one). Wherever there is a corner, there is a carbon, unless otherwise indicated (as with the oxygen). On the bottom left is ribose, the sugar found in RNA. The sugar on the bottom right is deoxyribose. Note that at carbon 2 (*), the ribose and deoxyribose differ by a single oxygen.

The lower left sugar in the figure is a ribose. In this depiction, the carbons, except the one outside of the ring, have not been drawn in, and they are not numbered. This is the standard way sugars are presented in texts. Can you tell how many carbons there are in this sugar? Count the corners and don’t forget the one that’s already indicated!

If you said “five,” you are right. Ribose is a pentose (pent = five) and happens to be the sugar present in ribonucleic acid, or RNA. Think to yourself what the sugar might be in deoxyribonucleic acid, or DNA. If you thought, deoxyribose, you’d be right.

The fourth sugar given in the figure is a deoxyribose. In organic chemistry, it’s not enough to know that corners indicate carbons. Each carbon also has a specific number, which becomes important in discussions of nucleic acids. Luckily, we get to keep our carbon counting pretty simple in basic biology. To count carbons, you start with the carbon to the right of the non-carbon corner of the molecule. The deoxyribose or ribose always looks to me like a little cupcake with a cherry on top. The “cherry” is an oxygen. To the right of that oxygen, we start counting carbons, so that corner to the right of the “cherry” is the first carbon. Now, keep counting. Here’s a little test: What is hanging down from carbon 2 of the deoxyribose?

If you said a hydrogen (H), you are right! Now, compare the deoxyribose to the ribose. Do you see the difference in what hangs off of the carbon 2 of each sugar? You’ll see that the carbon 2 of ribose has an –OH, rather than an H. The reason the deoxyribose is called that is because the O on the second carbon of the ribose has been removed, leaving a “deoxyed” ribose. This tiny distinction between the sugars used in DNA and RNA is significant enough in biology that we use it to distinguish the two nucleic acids.

In fact, these subtle differences in sugars mean big differences for many biological molecules. Below, you’ll find a couple of ways that apparently small changes in a sugar molecule can mean big changes in what it does. These little changes make the difference between a delicious sugar cookie and the crunchy exoskeleton of a dung beetle.

Sugar and Fuel

A marathon runner keeps fuel on hand in the form of “carbs,” or sugars. These fuels provide the marathoner’s straining body with the energy it needs to keep the muscles pumping. When we take in sugar like this, it often comes in the form of glucose molecules attached together in a polymer called starch. We are especially equipped to start breaking off individual glucose molecules the minute we start chewing on a starch.

Double X Extra: A monomer is a building block (mono = one) and a polymer is a chain of monomers. With a few dozen monomers or building blocks, we get millions of different polymers. That may sound nutty until you think of the infinity of values that can be built using only the numbers 0 through 9 as building blocks or the intricate programming that is done using only a binary code of zeros and ones in different combinations.

Our bodies then can rapidly take the single molecules, or monomers, into cells and crack open the chemical bonds to transform the energy for use. The bonds of a sugar are packed with chemical energy that we capture to build a different kind of energy-containing molecule that our muscles access easily. Most species rely on this process of capturing energy from sugars and transforming it for specific purposes.

Polysaccharides: Fuel and Form

Plants use the Sun’s energy to make their own glucose, and starch is actually a plant’s way of storing up that sugar. Potatoes, for example, are quite good at packing away tons of glucose molecules and are known to dieticians as a “starchy” vegetable. The glucose molecules in starch are packed fairly closely together. A string of sugar molecules bonded together through dehydration synthesis, as they are in starch, is a polymer called a polysaccharide (poly = many; saccharide = sugar). When the monomers of the polysaccharide are released, as when our bodies break them up, the reaction that releases them is called hydrolysis.

Double X Extra: The specific reaction that hooks one monomer to another in a covalent bond is called dehydration synthesis because in making the bond–synthesizing the larger molecule–a molecule of water is removed (dehydration). The reverse is hydrolysis (hydro = water; lysis = breaking), which breaks the covalent bond by the addition of a molecule of water.

Although plants make their own glucose and animals acquire it by eating the plants, animals can also package away the glucose they eat for later use. Animals, including humans, store glucose in a polysaccharide called glycogen, which is more branched than starch. In us, we build this energy reserve primarily in the liver and access it when our glucose levels drop.

Whether starch or glycogen, the glucose molecules that are stored are bonded together so that all of the molecules are oriented the same way. If you view the sixth carbon of the glucose to be a “carbon flag,” you’ll see in the figure that all of the glucose molecules in starch are oriented with their carbon flags on the upper left.

The orientation of monomers of glucose in polysaccharides can make a big difference in the use of the polymer. The glucoses in the molecule on the top are all oriented “up” and form starch. The glucoses in the molecule on the bottom alternate orientation to form cellulose, which is quite different in its function from starch.

Storing up sugars for fuel and using them as fuel isn’t the end of the uses of sugar. In fact, sugars serve as structural molecules in a huge variety of organisms, including fungi, bacteria, plants, and insects.

The primary structural role of a sugar is as a component of the cell wall, giving the organism support against gravity. In plants, the familiar old glucose molecule serves as one building block of the plant cell wall, but with a catch: The molecules are oriented in an alternating up-down fashion. The resulting structural sugar is called cellulose.

That simple difference in orientation means the difference between a polysaccharide as fuel for us and a polysaccharide as structure. Insects take it a step further with the polysaccharide that makes up their exoskeleton, or outer shell. Once again, the building block is glucose, arranged as it is in cellulose, in an alternating conformation. But in insects, each glucose has a little extra added on, a chemical group called an N-acetyl group. This addition of a single functional group alters the use of cellulose and turns it into a structural molecule that gives bugs that special crunchy sound when you accidentally…ahem…step on them.

These variations on the simple theme of a basic carbon-ring-as-building-block occur again and again in biological systems. In addition to serving roles in structure and as fuel, sugars also play a role in function. The attachment of subtly different sugar molecules to a protein or a lipid is one way cells communicate chemically with one another in refined, regulated interactions. It’s as though the cells talk with each other using a specialized, sugar-based vocabulary. Typically, cells display these sugary messages to the outside world, making them available to other cells that can recognize the molecular language.

Lipids: The Fatty Trifecta

Starch makes for good, accessible fuel, something that we immediately attack chemically and break up for quick energy. But fats are energy that we are supposed to bank away for a good long time and break out in times of deprivation. Like sugars, fats serve several purposes, including as a dense source of energy and as a universal structural component of cell membranes everywhere.

Fats: the Good, the Bad, the Neutral

Turn again to a nutrition label, and you’ll see a few references to fats, also known as lipids. (Fats are slightly less confusing than sugars in that they have only two names.) The label may break down fats into categories, including trans fats, saturated fats, unsaturated fats, and cholesterol. You may have learned that trans fats are “bad” and that there is good cholesterol and bad cholesterol, but what does it all mean?

Let’s start with what we mean when we say saturated fat. The question is, saturated with what? There is a specific kind of dietary fat called the triglyceride. As its name implies, it has a structural motif in which something is repeated three times. That something is a chain of carbons and hydrogens, hanging off in triplicate from a head made of glycerol, as the figure shows.  Those three carbon-hydrogen chains, or fatty acids, are the “tri” in a triglyceride. Chains like this can be many carbons long.

Double X Extra: We call a fatty acid a fatty acid because it’s got a carboxylic acid attached to a fatty tail. A triglyceride consists of three of these fatty acids attached to a molecule called glycerol. Our dietary fat primarily consists of these triglycerides.

Triglycerides come in several forms. You may recall that carbon can form several different kinds of bonds, including single bonds, as with hydrogen, and double bonds, as with itself. A chain of carbons and hydrogens can have every single available carbon bond taken by a hydrogen in a single covalent bond. This scenario of hydrogen saturation yields a saturated fat. The fat is saturated to its fullest, with every available covalent bond taken by a hydrogen single-bonded to a carbon.

Saturated fats have predictable characteristics. They lie flat easily and stick to each other, meaning that at room temperature, they form a dense solid. You will realize this if you find a little bit of fat on you to pinch. Does it feel pretty solid? That’s because animal fat is saturated fat. The fat on a steak is also solid at room temperature, and in fact, it takes a pretty high heat to loosen it up enough to become liquid. Animals are not the only organisms that produce saturated fat–coconuts, for example, are also known for their saturated fat content.

The top graphic above depicts a triglyceride with the glycerol, acid, and three hydrocarbon tails. The tails of this saturated fat, with every possible hydrogen space occupied, lie comparatively flat on one another, and this kind of fat is solid at room temperature. The fat on the bottom, however, is unsaturated, with bends or kinks wherever two carbons have double bonded, booting a couple of hydrogens and making this fat unsaturated, or lacking some hydrogens. Because of the space between the bumps, this fat is probably not solid at room temperature, but liquid.

You can probably now guess what an unsaturated fat is–one that has one or more hydrogens missing. Instead of single bonding with hydrogens at every available space, two or more carbons in an unsaturated fat chain will form a double bond with carbon, leaving no space for a hydrogen. Because some carbons in the chain share two pairs of electrons, they physically draw closer to one another than they do in a single bond. This tighter bonding results in a “kink” in the fatty acid chain.

In a fat with these kinks, the three fatty acids don’t lie as densely packed with each other as they do in a saturated fat. The kinks leave spaces between them. Thus, unsaturated fats are less dense than saturated fats and often will be liquid at room temperature. A good example of a liquid unsaturated fat at room temperature is canola oil.

A few decades ago, food scientists discovered that unsaturated fats could be resaturated or hydrogenated to behave more like saturated fats and have a longer shelf life. The process of hydrogenation–adding in hydrogens–yields trans fat. This kind of processed fat is now frowned upon and is being removed from many foods because of its associations with adverse health effects. If you check a food label and it lists among the ingredients “partially hydrogenated” oils, that can mean that the food contains trans fat.

Double X Extra: A triglyceride can have up to three different fatty acids attached to it. Canola oil, for example, consists primarily of oleic acid, linoleic acid, and linolenic acid, all of which are unsaturated fatty acids with 18 carbons in their chains.

Why do we take in fat anyway? Fat is a necessary nutrient for everything from our nervous systems to our circulatory health. It also, under appropriate conditions, is an excellent way to store up densely packaged energy for the times when stores are running low. We really can’t live very well without it.

Phospholipids: An Abundant Fat

You may have heard that oil and water don’t mix, and indeed, it is something you can observe for yourself. Drop a pat of butter–pure saturated fat–into a bowl of water and watch it just sit there. Even if you try mixing it with a spoon, it will just sit there. Now, drop a spoon of salt into the water and stir it a bit. The salt seems to vanish. You’ve just illustrated the difference between a water-fearing (hydrophobic) and a water-loving (hydrophilic) substance.

Generally speaking, compounds that have an unequal sharing of electrons (like ions or anything with a covalent bond between oxygen and hydrogen or nitrogen and hydrogen) will be hydrophilic. The reason is that a charge or an unequal electron sharing gives the molecule polarity that allows it to interact with water through hydrogen bonds. A fat, however, consists largely of hydrogen and carbon in those long chains. Carbon and hydrogen have roughly equivalent electronegativities, and their electron-sharing relationship is relatively nonpolar. Fat, lacking in polarity, doesn’t interact with water. As the butter demonstrated, it just sits there.

There is one exception to that little maxim about fat and water, and that exception is the phospholipid. This lipid has a special structure that makes it just right for the job it does: forming the membranes of cells. A phospholipid consists of a polar phosphate head–P and O don’t share equally–and a couple of nonpolar hydrocarbon tails, as the figure shows. If you look at the figure, you’ll see that one of the two tails has a little kick in it, thanks to a double bond between the two carbons there.

Phospholipids form a double layer and are the major structural components of cell membranes. Their bend, or kick, in one of the hydrocarbon tails helps ensure fluidity of the cell membrane. The molecules are bipolar, with hydrophilic heads for interacting with the internal and external watery environments of the cell and hydrophobic tails that help cell membranes behave as general security guards.

The kick and the bipolar (hydrophobic and hydrophilic) nature of the phospholipid make it the perfect molecule for building a cell membrane. A cell needs a watery outside to survive. It also needs a watery inside to survive. Thus, it must face the inside and outside worlds with something that interacts well with water. But it also must protect itself against unwanted intruders, providing a barrier that keeps unwanted things out and keeps necessary molecules in.

Phospholipids achieve it all. They assemble into a double layer around a cell but orient to allow interaction with the watery external and internal environments. On the layer facing the inside of the cell, the phospholipids orient their polar, hydrophilic heads to the watery inner environment and their tails away from it. On the layer to the outside of the cell, they do the same.
As the figure shows, the result is a double layer of phospholipids with each layer facing a polar, hydrophilic head to the watery environments. The tails of each layer face one another. They form a hydrophobic, fatty moat around a cell that serves as a general gatekeeper, much in the way that your skin does for you. Charged particles cannot simply slip across this fatty moat because they can’t interact with it. And to keep the fat fluid, one tail of each phospholipid has that little kick, giving the cell membrane a fluid, liquidy flow and keeping it from being solid and unforgiving at the temperatures at which cells thrive.

Steroids: Here to Pump You Up?

Our final molecule in the lipid fatty trifecta is cholesterol. As you may have heard, there are a few different kinds of cholesterol, some of which we consider “good” and some of which we consider “bad.” The good cholesterol, high-density lipoprotein, or HDL, helps us out in part because it removes the bad cholesterol, low-density lipoprotein, or LDL, from our blood. The presence of LDL is associated with inflammation of the lining of the blood vessels, which can lead to a variety of health problems.

But cholesterol has some other reasons for existing. One of its roles is in the maintenance of cell membrane fluidity. Cholesterol is inserted throughout the lipid bilayer and serves as a block to the fatty tails that might otherwise stick together and become a bit too solid.

Cholesterol’s other starring role as a lipid is as the starting molecule for a class of hormones we call steroids, or steroid hormones. With a few snips here and additions there, cholesterol can be changed into the steroid hormones progesterone, testosterone, or estrogen. These molecules look quite similar, but they play very different roles in organisms. Testosterone, for example, generally masculinizes vertebrates (animals with backbones), while progesterone and estrogen play a role in regulating the ovulatory cycle.

Double X Extra: A hormone is a blood-borne signaling molecule. It can be lipid based, like testosterone, or protein based, like insulin.

Proteins

As you progress through learning biology, one thing will become more and more clear: Most cells function primarily as protein factories. It may surprise you to learn that proteins, which we often talk about in terms of food intake, are the fundamental molecule of many of life’s processes. Enzymes, for example, form a single broad category of proteins, but there are millions of them, each one governing a small step in the molecular pathways that are required for living.

Levels of Structure

Amino acids are the building blocks of proteins. A few amino acids strung together form a peptide, while many amino acids linked together form a polypeptide. When many amino acids strung together interact with each other to form a properly folded molecule, we call that molecule a protein.

For a string of amino acids to ultimately fold up into an active protein, they must first be assembled in the correct order. The code for their assembly lies in the DNA, but once that code has been read and the amino acid chain built, we call that simple, unfolded chain the primary structure of the protein.

This chain can consist of hundreds of amino acids that interact all along the sequence. Some amino acids are hydrophobic and some are hydrophilic. In this context, like interacts best with like, so the hydrophobic amino acids will interact with one another, and the hydrophilic amino acids will interact together. As these contacts occur along the string of molecules, different conformations will arise in different parts of the chain. We call these different conformations along the amino acid chain the protein’s secondary structure.

Once those interactions have occurred, the protein can fold into its final, or tertiary, structure and be ready to serve as an active participant in cellular processes. To achieve the tertiary structure, the secondary interactions along the amino acid chain must usually remain in place, and the pH, temperature, and salt balance must be just right to facilitate the folding. This tertiary folding takes place through interactions of the secondary structures along the different parts of the amino acid chain.

The final product is a properly folded protein. If we could see it with the naked eye, it might look a lot like a wadded up string of pearls, but that “wadded up” look is misleading. Protein folding is a carefully regulated process that is determined at its core by the amino acids in the chain: their hydrophobicity and hydrophilicity and how they interact together.
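
As a toy illustration of the property driving those interactions, here is a short Python sketch (my illustration, not from the original text) that scans a primary structure, written in standard one-letter amino acid codes, and tallies which residues fall into one rough hydrophobic grouping. Real folding depends on far more than this, but it shows the kind of information carried in the chain itself:

```python
# Toy illustration: classify residues of a primary sequence as hydrophobic or
# hydrophilic. The grouping below is a rough, commonly used one, not a precise
# biochemical definition, and real protein folding depends on much more.
HYDROPHOBIC = set("AVLIMFW")   # alanine, valine, leucine, isoleucine,
                               # methionine, phenylalanine, tryptophan

def classify(sequence: str) -> dict:
    """Count hydrophobic vs. hydrophilic residues in a one-letter sequence."""
    counts = {"hydrophobic": 0, "hydrophilic": 0}
    for residue in sequence:
        key = "hydrophobic" if residue in HYDROPHOBIC else "hydrophilic"
        counts[key] += 1
    return counts

# A short, made-up primary structure:
print(classify("MKVLAWDENSTF"))  # {'hydrophobic': 6, 'hydrophilic': 6}
```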

In many instances, however, a complete protein consists of two or more interacting amino acid chains. A good example is hemoglobin in red blood cells. Its job is to grab oxygen and deliver it to the body’s tissues. A complete hemoglobin protein consists of four separate amino acid chains all properly folded into their tertiary structures and interacting as a single unit. In cases like this involving two or more interacting amino acid chains, we say that the final protein has a quaternary structure. Some proteins can consist of as many as a dozen interacting chains, behaving as a single protein unit.

A Plethora of Purposes

What does a protein do? Let us count the ways. Really, that’s almost impossible because proteins do just about everything. Some of them tag things. Some of them destroy things. Some of them protect. Some mark cells as “self.” Some serve as structural materials, while others are highways or motors. They aid in communication, they operate as signaling molecules, they transfer molecules and cut them up, they interact with each other in complex, interrelated pathways to build things up and break things down. They regulate genes and package DNA, and they regulate and package each other.

As described above, proteins are the final folded arrangement of a string of amino acids. One way we obtain these building blocks for the millions of proteins our bodies make is through our diet. You may hear about foods that are high in protein or people eating high-protein diets to build muscle. When we take in those proteins, we can break them apart and use the amino acids that make them up to build proteins of our own.

Nucleic Acids

How does a cell know which proteins to make? It has a code for building them, one that is especially guarded in a cellular vault in our cells called the nucleus. This code is deoxyribonucleic acid, or DNA. The cell makes a copy of this code and sends it out to specialized structures that read it and build proteins based on what they read. As with any code, a typo–a mutation–can result in a message that doesn’t make as much sense. When the code gets changed, sometimes the protein that the cell builds using that code will be changed, too.

Biohazard! The names associated with nucleic acids can be confusing because they all start with nucle-. It may seem obvious or easy now, but a brain freeze on a test could mix you up. You need to fix in your mind that the shorter term (10 letters, four syllables), nucleotide, refers to the smaller molecule, the three-part building block. The longer term (12 characters, including the space, and five syllables), nucleic acid, which is inherent in the names DNA and RNA, designates the big, long molecule.

DNA vs. RNA: A Matter of Structure

DNA and its nucleic acid cousin, ribonucleic acid, or RNA, are both made of the same kinds of building blocks. These building blocks are called nucleotides. Each nucleotide consists of three parts: a sugar (ribose for RNA and deoxyribose for DNA), a phosphate, and a nitrogenous base. In DNA, every nucleotide has identical sugars and phosphates, and in RNA, the sugar and phosphate are also the same for every nucleotide.

So what’s different? The nitrogenous bases. DNA has a set of four to use as its coding alphabet. These are the purines, adenine and guanine, and the pyrimidines, thymine and cytosine. The nucleotides are abbreviated by their initial letters as A, G, T, and C. From variations in the arrangement and number of these four molecules, all of the diversity of life arises. Just four different types of the nucleotide building blocks, and we have you, bacteria, wombats, and blue whales.

RNA is similarly simple at its core, consisting of only four different nucleotides. In fact, it uses three of the same nitrogenous bases as DNA–A, G, and C–but it substitutes a base called uracil (U) where DNA uses thymine. Uracil is a pyrimidine.

DNA vs. RNA: Function Wars

An interesting thing about the nitrogenous bases of the nucleotides is that they pair with each other, using hydrogen bonds, in a predictable way. An adenine will almost always bond with a thymine in DNA or a uracil in RNA, and cytosine and guanine will almost always bond with each other. This pairing capacity allows the cell to use a sequence of DNA and build either a new DNA sequence, using the old one as a template, or build an RNA sequence to make a copy of the DNA.

These two different uses of A-T/U and C-G base pairing serve two different purposes. DNA is copied into DNA usually when a cell is preparing to divide and needs two complete sets of DNA for the new cells. DNA is copied into RNA when the cell needs to send the code out of the vault so proteins can be built. The DNA stays safely where it belongs.
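
To make those pairing rules concrete, here is a minimal Python sketch (mine, not part of the original text) that applies them to a made-up stretch of DNA, building both a complementary DNA strand and an RNA copy:

```python
# Pairing rules from the text: A pairs with T (or with U in RNA); C pairs with G.
DNA_PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIRS = {"A": "U", "T": "A", "C": "G", "G": "C"}

def copy_dna(template: str) -> str:
    """Build the complementary DNA strand, as when DNA is copied before cell division."""
    return "".join(DNA_PAIRS[base] for base in template)

def copy_to_rna(template: str) -> str:
    """Build the RNA copy of a DNA template, substituting U wherever T would pair."""
    return "".join(RNA_PAIRS[base] for base in template)

template = "TACGGATTC"          # a made-up DNA sequence
print(copy_dna(template))       # ATGCCTAAG
print(copy_to_rna(template))    # AUGCCUAAG
```

The same base-by-base lookup captures why one DNA sequence can serve as the template for either a new DNA strand or an RNA message.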

RNA is really a nucleic acid jack-of-all-trades. It not only serves as the copy of the DNA but also is the main component of the two types of cellular workers that read that copy and build proteins from it. At one point in this process, the three types of RNA come together in protein assembly to make sure the job is done right.


 By Emily Willingham, DXS managing editor 
This material originally appeared in similar form in Emily Willingham’s Complete Idiot’s Guide to College Biology

Anorexia nervosa, neurobiology, and family-based treatment

Via Wikimedia Commons
Photo credit: Sandra Mann
By Harriet Brown, DXS contributor

Back in 1978, psychoanalyst Hilde Bruch published the first popular book on anorexia nervosa. In The Golden Cage, she described anorexia as a psychological illness caused by environmental factors: sexual abuse, over-controlling parents, fears about growing up, and/or other psychodynamic factors. Bruch believed young patients needed to be separated from their families (a concept that became known as a “parentectomy”) so therapists could help them work through the root issues underlying the illness. Then, and only then, patients would choose to resume eating. If they were still alive.

Bruch’s observations dictated eating-disorders treatments for decades, treatments that led to spectacularly ineffective results. Only about 35% of people with anorexia recovered; another 20% died, of starvation or suicide; and the rest lived with some level of chronic illness for the rest of their lives.

Not a great track record, overall, and especially devastating for women, who suffer from anorexia at a rate of 10 times that of men. Luckily, we know a lot more about anorexia and other eating disorders now than we did in 1978.

“It’s Not About the Food”

In Bruch’s day, anorexia wasn’t the only illness attributed to faulty parenting and/or trauma. Therapists saw depression, anxiety, schizophrenia, eating disorders, and homosexuality (long considered a psychiatric “illness”) as ailments of the mind alone. Thanks to the rising field of behavioral neuroscience, we’ve begun to untangle the ways brain circuitry, neural architecture, and other biological processes contribute to these disorders. Most experts now agree that depression and anxiety can be caused by, say, neurotransmitter imbalances as much as unresolved emotional conflicts, and treat them accordingly. But the field of eating-disorders treatment has been slow to jump on the neurobiology bandwagon. When my daughter was diagnosed with anorexia in 2005, for instance, we were told to find her a therapist and try to get our daughter to eat “without being the food police,” because, as one therapist informed us, “It’s not about the food.”

Actually, it is about the food. Especially when you’re starving.

Ancel Keys’ 1950 Semi-Starvation Study tracked the effects of starvation and subsequent re-feeding on 36 healthy young men, all conscientious objectors who volunteered for the experiment. Keys was drawn to the subject during World War II, when millions in war-torn Europe – especially those in concentration camps – starved for years. One of Keys’ most interesting findings was that starvation itself, followed by re-feeding after a period of prolonged starvation, produced both physical and psychological symptoms, including depression, preoccupation with weight and body image, anxiety, and obsessions with food, eating, and cooking—all symptoms we now associate with anorexia. Re-feeding the volunteers eventually reversed most of the symptoms, but it proved difficult on a psychological level, in some ways more difficult than the starvation period itself. These results were a clear illustration of just how profound the effects of months of starvation were on the body and mind.

Alas, Keys’ findings were pretty much ignored by the field of eating-disorders treatment for 40-some years, until new technologies like functional magnetic resonance imaging (fMRI) and research gave new context to his work. We now know there is no single root cause for eating disorders. They’re what researchers call multi-factorial, triggered by a perfect storm of factors that probably differs for each person who develops an eating disorder. “Personality characteristics, the environment you live in, your genetic makeup—it’s like a cake recipe,” says Daniel le Grange, Ph.D., director of the Eating Disorders Program at the University of Chicago. “All the ingredients have to be there for that person to develop anorexia.”

One of those ingredients is genetics. Twenty years ago, the Price Foundation sponsored a project that collected DNA samples from thousands of people with eating disorders, their families, and control participants. That data, along with information from the 2006 Swedish Twin Study, suggests that anorexia is highly heritable. “Genes play a substantial role in liability to this illness,” says Cindy Bulik, Ph.D., a professor of psychiatry and director of the University of North Carolina’s Eating Disorders Program. And while no one has yet found a specific anorexia gene, researchers are focusing on an area of chromosome 1 that shows important gene linkages.

Certain personality traits associated with anorexia are probably heritable as well. “Anxiety, inhibition, obsessionality, and perfectionism seem to be present in families of people with an eating disorder,” explains Walter Kaye, M.D., who directs the Eating Disorders Treatment and Research Program at the University of California-San Diego. Another ingredient is neurobiology—literally, the way your brain is structured and how it works. Dr. Kaye’s team at UCSD uses fMRI technology to map blood flow in people’s brains as they think of or perform a task. In one study, Kaye and his colleagues looked at the brains of people with anorexia, people recovered from anorexia, and people who’d never had an eating disorder as they played a gambling game. Participants were asked to guess a number and were rewarded for correct guesses with money or “punished” for incorrect or no guesses by losing money.

Participants in the control group responded to wins and losses by “living in the moment,” wrote researchers: “That is, they made a guess and then moved on to the next task.” But people with anorexia, as well as people who’d recovered from anorexia, showed greater blood flow to the dorsal caudate, an area of the brain that helps link actions and their outcomes, as well as differences in their brains’ dopamine pathways. “People with anorexia nervosa do not live in the moment,” concluded Kaye. “They tend to have exaggerated and obsessive worry about the consequences of their behaviors, looking for rules when there are none, and they are overly concerned about making mistakes.” This study was the first to show altered pathways in the brain even in those recovered from anorexia, suggesting that inherent differences in the brain’s architecture and signaling systems help trigger the illness in the first place.

Food Is Medicine

Some of the best news to come out of research on anorexia is a new therapy aimed at kids and teens. Family-based treatment (FBT), also known as the Maudsley approach, was developed at the Maudsley Hospital in London by Ivan Eisler and Christopher Dare, family therapists who watched nurses on the inpatient eating-disorders unit get patients to eat by sitting with them, talking to them, rubbing their backs, and supporting them. Eisler and Dare wondered how that kind of effective encouragement could be used outside the hospital.

Their observations led them to develop family-based treatment, or FBT, a three-phase treatment for teens and young adults that sidesteps the debate on etiology and focuses instead on recovery. “FBT is agnostic on cause,” says Dr. Le Grange. During phase one, families (usually parents) take charge of a child’s eating, with a goal of fully restoring weight (rather than getting to the “90 percent of ideal body weight” many programs use as a benchmark). In phase two, families gradually transfer responsibility for eating back to the teen. Phase three addresses other problems or issues related to normal adolescent development, if there are any.

FBT is a pragmatic approach that recognizes that while people with anorexia are in the throes of acute malnourishment, they can’t choose to eat. And that represents one of the biggest shifts in thinking about eating disorders. The DSM-IV, the most recent “bible” of psychiatric treatment, lists as the first symptom of anorexia “a refusal to maintain body weight at or above a minimally normal weight for age and height.” That notion of refusal is key to how anorexia has been seen, and treated, in the past: as a refusal to eat or gain weight. An acting out. A choice. Which makes sense within the psychodynamic model of cause.

But it doesn’t jibe with the research, which suggests that anorexia is more of an inability to eat than a refusal. Forty-five years ago, Aryeh Routtenberg, then (and still) a professor of psychology at Northwestern University, discovered that when he gave rats only brief daily access to food but let them run as much as they wanted on wheels, they would gradually eat less and less, and run more and more. In fact, they would run without eating until they died, a paradigm Routtenberg called activity-based anorexia (ABA). Rats with ABA seemed to be in the grip of a profound physiological imbalance, one that overrode the normal biological imperatives of hunger and self-preservation. ABA in rats suggests that however it starts, once the cycle of restricting and/or compulsive exercising passes a certain threshold, it takes on a life of its own. Self-starvation is no longer (if it ever was) a choice, but a compulsion to the death.

That’s part of the thinking in FBT. Food is the best medicine for people with anorexia, but they can’t choose to eat. They need someone else to make that choice for them. Therapists don’t sit at the table with patients, but parents do. And parents love and know their children. Like the nurses at the Maudsley Hospital, they find ways to get kids to eat. In a sense, what parents do is outshout the anorexia “voice” many sufferers report hearing, a voice in their heads that tells them not to eat and berates them when they do. Parents take the responsibility for making the choice to eat away from the sufferer, who may insist she’s choosing not to eat but who, underneath the illness, is terrified and hungry.

The best aspect of FBT is that it works. Not for everyone, but for the majority of kids and teens. Several randomized controlled studies of FBT and “treatment as usual” (talk therapy without pressure to eat) show recovery rates of 80 to 90 percent with FBT—a huge improvement over previous recovery rates. A study at the University of Chicago is looking at adapting the treatment for young adults; early results are promising.

The most challenging aspect of FBT is that it’s hard to find. Relatively few therapists in the U.S. are trained in the approach. When our daughter got sick, my husband and I couldn’t find a local FBT therapist. So we cobbled together a team that included our pediatrician, a therapist, and lots of friends who supported our family through the grueling work of re-feeding our daughter. Today she’s a healthy college student with friends, a boyfriend, career goals, and a good relationship with us.

A few years ago, Dr. Le Grange and his research partner, Dr. James Lock of Stanford, created a training institute that certifies a handful of FBT therapists each year. (For a list of FBT providers, visit the Maudsley Parents website.) It’s a start. But therapists are notoriously slow to adopt new treatments, and FBT is no exception. Some therapists find FBT controversial because it upends the conventional view of eating disorders and treatments. Some cling to the psychodynamic view of eating disorders despite the lack of evidence. Still, many in the field have at least heard of FBT and Kaye’s neurobiological findings, even if they don’t believe in them yet.

Change comes slowly. But it comes.

* * *

Harriet Brown teaches magazine journalism at the S.I. Newhouse School of Public Communications in Syracuse, New York. Her latest book is Brave Girl Eating: A Family’s Struggle with Anorexia (William Morrow, 2010).


How pregnant are you? Let’s find out

There’s an old saying: You can’t be a little bit pregnant. Pregnancy is what you might call a binary condition – you either are with child, or you’re not. Home pregnancy tests embody this thinking. You pee on the end of a stick, and three minutes later you either do or do not see a line in the results window. Congratulations, you’re expecting!

Biologically, of course, things are a bit more complicated. Pregnancy tests check for the presence of a particular protein, human chorionic gonadotropin (hCG), that is also elevated in women with breast and ovarian cancers. As a result, it’s sometimes useful to be able to quantify the levels of hCG – or any other so-called “biomarker” – with a bit more precision. A new diagnostic device, developed by a team of Texas researchers and described in the journal Nature Communications, enables precisely that.
The team developed what’s called a microfluidic device, a circuit of tiny channels etched into glass (or sometimes plastic or a rubber polymer) that enable researchers to run chemical assays on tiny volumes of sample. That’s helpful when the sample is particularly precious or hard to come by – a drop of blood from a newborn baby, say.
Microfluidic devices, sometimes called “lab-on-a-chip” devices (because they resemble computer chips in both design and size), are popular in both drug development companies and research laboratories, as well as in the clinic. Their small size means they use smaller reagent volumes (making them relatively inexpensive) and produce less waste. They are also faster and higher throughput than many traditional assays, and they are easily automated.
The downside is in the data output. To read the results of a microfluidic assay, researchers generally need some large and expensive piece of hardware that can, for instance, interrogate the chip with a laser to measure fluorescence intensity. That requirement isn’t a problem for most research labs, but it does reduce the likelihood that the technology can be adopted by your general practitioner. And it makes the development of microfluidics-based home tests, analogous to a home pregnancy kit, all but impossible.(*)
To circumvent these problems, the Texas team used a clever “SlipChip” design. A SlipChip is a microfluidic device formed by overlaying two glass plates, whose channels can form either of two flow paths depending on the position of the top plate relative to the bottom. In one configuration, the channels flow left-to-right; in the other (that is, after sliding or “slipping” the top plate), they flow bottom-to-top. Samples and reagents are loaded in one configuration, and the chip is “slipped” to start the readout process.
The SlipChip design
Source: Nat. Commun. 3:1283 doi: 10.1038/ncomms2292 (2012).
Here’s how the authors describe it:

In the SlipChip, two pieces of glass etched with microfluidic wells and channels are assembled together in the presence of mineral oil. A fluidic path is formed when the two plates are aligned in a specific configuration. Samples or reagents are preloaded through drilled holes using a pipette, and the top plate is then moved relative to the bottom plate to enable the diffusion and reaction of samples or reagents.

This video shows how it works.

The team calls its device a “volumetric bar-chart chip,” or V-Chip. The V-Chip runs what’s called an ELISA (enzyme-linked immunosorbent assay), which is the gold standard in biomarker quantitation tests. Normally ELISAs are read with some sort of instrument that can measure either color, fluorescence, or chemiluminescence. The V-Chip is far simpler (albeit less quantitative).

It uses an enzyme called catalase to degrade hydrogen peroxide into oxygen gas in volumes proportional to the amount of the molecule of interest – in this case, hCG. That gas, in turn, forces a column of red dye upwards to a height determined by the hCG concentration. (See the V-Chip in action here.) The result is an easy-to-read microfluidic bar graph, with the height of each bar indicating not only whether a woman is pregnant, but just how much hCG is in her urine. In a comparison against a commercial home pregnancy test, the V-Chip was more sensitive at low hCG concentrations, and more accurate at very high concentrations.
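
As a rough sketch of how such a bar could be turned into a number, one could compare a sample’s bar height against bars produced by known hCG standards. The Python code and calibration values below are hypothetical illustrations of that idea, not taken from the paper:

```python
# Hypothetical calibration: bar heights (mm) measured for known hCG standards
# (mIU/mL). Real calibration values would come from the device itself; these
# numbers are made up for illustration.
standards = [(0.0, 0.0), (25.0, 4.0), (100.0, 12.0), (500.0, 30.0)]  # (conc, height)

def concentration_from_height(height_mm: float) -> float:
    """Linearly interpolate an hCG concentration from a measured bar height."""
    pts = sorted(standards, key=lambda p: p[1])
    for (c_lo, h_lo), (c_hi, h_hi) in zip(pts, pts[1:]):
        if h_lo <= height_mm <= h_hi:
            frac = (height_mm - h_lo) / (h_hi - h_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("bar height outside the calibrated range")

print(concentration_from_height(8.0))  # ~62.5 mIU/mL with these made-up standards
```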
The V-Chip’s design is flexible, the authors note, and can be used to test either a large number of samples for a single molecule (as might be done in a clinical trial) or a single sample for multiple molecules, as in cancer screening. The current design allows as many as 50 parallel fluidic channels, meaning up to 50 molecules could be tested in parallel. In one experiment, the team used a six-channel design to test a panel of breast cancer cell lines for the abundance of three proteins (estrogen receptor, progesterone receptor, and human epidermal growth factor receptor) commonly found on breast cancer cells.
The simplicity of the test means it should be possible to design a device that can be used at home or in a doctor’s office. It is cheap, fast, and requires no special hardware. That means it could be used in areas lacking access to top-shelf medical care. It could even be used in the absence of a physician altogether. “The bar chart could be captured as an image using a smart phone, similar to a barcode reader and transmitted to a cloud computer for instant medical suggestions in the future,” the authors write. Now, how cool would that be?

(*) That’s not entirely true. Harvard researcher George Whitesides has figured out a way to print microfluidic circuits onto paper, resulting in very simple and inexpensive designs. Boston-based Diagnostics For All is developing such tests for use in third world countries.

Making Light in Electronics

By DXS Physics Editor Matthew Francis 

A while back, I wrote about one of the most common ways of making electric light: fluorescent bulbs. Understanding fluorescent lights requires quantum mechanics! While a lot of quantum physics seems pretty removed from our daily lives, it’s essential to most of our modern technology. In fact, reading what I’m writing requires quantum mechanics, since you are using a computer (maybe a handheld computer like an iPad or smart phone, but it’s still a computer) or a printout from a computer.

Modern electronics, including computers and phones, depend on semiconductors. Conductors (like the copper wire in power cords) let electricity flow easily; semiconductors conduct electricity more reluctantly, but that very reluctance lets us control the flow. While they can’t sustain large currents like conductors can, we can tinker with the chemistry of semiconductors to make them conduct electricity in very precise ways. One of those ways lets semiconductor devices make light: those are known as light-emitting diodes, or LEDs.

You likely have many LEDs in your home: they’re common as indicator lights on appliances, and you might even have LED light bulbs. While they’re pretty expensive right now, the price of LED lights is getting lower all the time, and they have major advantages over both incandescent (old-style) light bulbs and fluorescents. They don’t burn out as quickly as even fluorescent lights (themselves longer-lived than incandescents), and they consume less energy. Since they are based on solids rather than gases, they’re not going to break easily, either! But how do they work?

The Electrons in the Band

When I described fluorescent lights in my earlier post, I described how atoms have distinct energy levels inside them, and light is produced when electrons move between those energy levels. Fluorescent lights use gases (generally mercury vapor), so the atoms are relatively widely separated. In solids, including semiconductors, atoms are tightly packed together, forming bonds that don’t break without high pressures or temperatures. In fact, they may also share electrons with each other; a particularly dramatic example of this is in metals, where the electrons in the highest energy levels of the atoms all form a gas that surrounds the atoms. That’s why metals are such good conductors—a little push from a battery or other power source makes those electrons flow in one direction (on average at least), much as a fan creates currents in the air.

Semiconductors are a bit more complicated: their electrons are loosely bound, but still stuck to their host atoms. The way physicists understand this is something known as the band model: just like atoms have energy levels, solids have energy bands. Low energies correspond to electrons stuck to their atoms, which can’t leave; we call these closed shell electrons (for reasons that aren’t important for this particular post). Moderate energies are known as valence electrons, which stay put ordinarily, but can be persuaded to move if given the right incentive. Finally, high energies are conduction electrons, which aren’t tied to a particular atom at all; as their name suggests, they are the ones that carry electric current.

Whether a solid conducts electricity depends on its band structure, and the size of the energy barrier in between the bands, which is called a gap. Large gaps require large energies for electrons to jump them, while smaller gaps are more easily jumped. Conductors have negligible gaps between their valence and conduction bands, while insulators have huge gaps. Semiconductors lie in between; adding extra atoms to a semiconductor can make the gap smaller (a process known as “doping”, which sometimes makes describing it unintentionally funny).

Cars and Roads and Electrons

At low temperatures, semiconductors may not conduct electricity at all, since no electrons can jump the gap into the conduction band. Either warming them up a bit or applying an external electric current gives the electrons the energy they need to move into the conduction band.

I was pondering analogies about band structures to help us understand them, and thought of this one based on cars and roads. Think of closed shells as like parking spaces along a road: cars (which stand in for electrons) are stationary. Valence bands are the slow lane, which is clogged with traffic, so the cars technically can move, but don’t. The conduction bands are fast lanes: cars can really zip, but there’s a traffic barrier between the slow lane and fast lane. (That barrier is the weakest part of my analogy, so remember that we should be thinking of a barrier as something that can be traversed under some conditions but not others.)

One more complication: there are two types of semiconductors, known as n-type and p-type. In n-type, just a few electrons (cars) have access to the conduction band (fast lane) at a time, but in p-type, enough electrons get in to leave holes in the valence band. Applying a current to the semiconductor shifts another valence electron into the hole, but that leaves another hole, and so forth…so it looks like the hole is moving! In fact, physicists refer to this as “hole conduction”, which also sounds odd if you’re not used to it.

Now we’re finally ready to understand LEDs. If you join an n-type semiconductor to a p-type semiconductor, you make something known as a diode. (The prefix di- refers to the number two. If you join three semiconductors, you get a transistor of either the pnp or npn types, depending on the order you use.) The bands (lanes) don’t line up perfectly at the junction: the conduction band in the n-type is generally only slightly higher than the valence band of the p-type, so just a little nudge is needed to move electrons across. This means when they reach the junction between the materials, electrons from the n-type semiconductor can fill the holes on the p-type side, which is a decrease in energy. Just as in individual atoms, moving from a higher energy level to a lower energy level makes a photon—and that’s where the “LE” in “LED” comes from!
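
To put rough numbers on that: the photon’s energy is about equal to the energy the electron gives up crossing the gap, and its wavelength follows from the standard conversion of roughly 1240 eV·nm divided by that energy. The band-gap values in this little Python sketch are approximate, illustrative figures of my own, not taken from the article:

```python
# Photon wavelength from the energy an electron loses at the junction,
# using the standard conversion hc ~ 1240 eV*nm. The gap values below are
# approximate illustrations, not figures from the article.
HC_EV_NM = 1240.0

def wavelength_nm(energy_ev: float) -> float:
    """Wavelength in nanometers of a photon carrying the given energy in eV."""
    return HC_EV_NM / energy_ev

for label, gap_ev in [("~1.4 eV gap (infrared LED)", 1.4),
                      ("~1.9 eV gap (red LED)", 1.9),
                      ("~2.8 eV gap (blue LED)", 2.8)]:
    print(f"{label}: about {wavelength_nm(gap_ev):.0f} nm")
```

Bigger gaps mean higher-energy photons, which shifts the emitted light toward the blue end of the spectrum.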

LEDs tend to produce very pure colors, rather than the mixture of colors our eyes perceive as white light. To create LED light bulbs, generally blue LEDs are coated with a phosphorescent material, much like the kind used in fluorescent bulbs. Unlike fluorescents, though, there’s no gas involved, and less heat is lost (though there is still a little bit). Together these factors make LED light bulbs longer-lasting and more efficient even than fluorescents, though currently they are far more expensive.

Despite how common LEDs and other semiconductors are, they’re considered fairly advanced physics. But guess what: if I did my job right, you should understand LED physics now! What is often thought of as “advanced” is really everyday science, and it’s a part of how quantum mechanics (with all its electrons and fascinating interactions on the microscopic level) has helped create our modern world.

Two Science Online 2012 sessions for your consideration


Tomorrow, I head for North Carolina to attend Science Online 2012. I attended last year as an information sponge and observer who knew no one and experienced some highlights and lowlights. This year, I’m attending as a participant and as a moderator of two sessions. The first session, on Thursday afternoon, is with Deborah Blum, and we’ll be leading a discussion about how and when to include basic science in health and medical writing without distracting the reader. The second session I’m moderating is with Maia Szalavitz, and we’ll be talking about whether it’s possible to write about health and medicine as an advocate and still be even-handed. Session descriptions are below, as are the topics that we’ll be tossing around for discussion.


Thursday, 2:45 p.m.: The basic science behind the medical research: Where to find it, how and when to use it. 

Sometimes, a medical story makes no sense without the context of the basic science–the molecules, cells, and processes that led to the medical results. At other times, inclusion of the basic science can simply enhance the story. How can science writers, especially those without specific training in science, find, understand, and explain that context? As important, when should they use it? The answers to the second question can depend on publishing context, intent, and word count. In this session, the moderators, who have experience incorporating basic science into medically based pieces, will share their insights into the whens and whys of using it. The session will also include specific examples of what the moderators and audience have found works and doesn’t work from their own writing.

Deborah and I have been talking about some issues we’d like to raise for discussion. The possibilities are expansive. Some highlights:

  • Scientific explanation (and understanding) is the foundation for the best science writing. In fact, if the writer doesn’t understand the science, he or she may miss the most important part of the story. But we worry that pausing to explain can slow a story down or disrupt the flow. In print, writers deal with this by condensing and simplifying explanations and also by trying to make them lively and vivid, such as by use of analogy. But online, we use hotlinks as often if not more often for the same purpose. 
  • Reaching a balance between links and prose can be a difficult task. Another possible pitfall is writing an explanation that’s more about teaching ourselves than it is about informing a reader sufficiently for story comprehension. How many writers run into that problem?
  • Online, the temptation is to give the barest explanation and link to a fuller account, but that approach has pros and cons. More information is available to the reader, and the sourcing is transparent. But how often do readers follow those links – and how often do they return? Issues with links include that they are not necessarily evergreen, that they can lose the reader (a link can be an exit portal), and that the reader may not use them at all, thus missing some of the story’s relevant information.
  • A reader may actually learn more from a print story where there are no built-in escape clauses. So how does the online science writer best construct a story that illuminates the subject? Are readers learning as much from our work as they do from a print version? (And there’s that age-old question of, Are we here to teach or to inform?)
  • Are we diminishing our own craft if we use links to let others tell the story for us? If we simply link out rather than working to supply an accessible explanation, negatives could include not pushing ourselves as writers and not expanding our own knowledge base, both essential to our craft.
  • How much do we actually owe our readers here? How much work should we expect them to do?
  • What are some ways to address issues of flow, balance, clarity? One possibility is, of course, expert quotes. Twitter is buzzing with scientists, many of whom likely would be pleased to explain a concept or brainstorm about it. (I’ve helped people who have “crowdsourced” in this way for a story, just providing an understandable, basic explanation for something complex).
  • Deborah and I are considering a challenge for the audience with a couple of basic science descriptives, to define them for a non-expert audience without using typical hackneyed phrases. Ideas for this challenge are welcome.
  • We also will feature some examples from our own work in which we think we bollixed up something in trying to explain it (overexplained or did it more for our own understanding than the reader’s) and examples from our own or others’ work of good accessible writing explaining a basic concept. We particularly want to show some explanations of quite complicated concepts–some that worked, some that didn’t. Suggestions for these are welcome!
  • Finally, when we do use links in our online writing, what constitutes a quality link?
———————————————
Saturday, 10:45 a.m.: Advocacy in medical blogging/communication. Can you be an advocate and still be fair?
There is already a session on how reporting facts on controversial topics can lead to accusations of advocacy. But what if you *are* an avowed advocate in a medical context, either as a person with a specific condition (autism, multiple sclerosis, cancer, heart disease) or an ally? How can you, as a self-advocate or ally of an advocate, still retain credibility–and for what audience?

The genesis of this session was my experience in the autism community. I’m an advocate of neurodiversity, the basic premise of which is that people of all neurologies have potential that should be sought, emphasized, and nurtured over their disabilities. Maia, the co-moderator of our session, has her own story of advocacy to tell as a writer about pain, pain medication, mental health, and addiction. 


Either of these topics is controversial, and when you’ve put yourself forward as an advocate, how can you also present as a trustworthy voice on the subject? Maia and I will lead a discussion that will hit, among other things, on the following topics that we hope will lead to a vigorous exchange and input from people whose advocacy is in other arenas:

  • Can stating facts or scientific findings themselves lead to a perception of advocacy? Maia’s experience is, for example, about observing that heroin doesn’t addict everyone who tries it. My example is about noting the facts from research studies that have identified no autism-vaccine link.
  • Any time either of us talks about vaccines or medications for mental health, we’ve run into accusations of being a “Big Pharma tool,” or worse. What response do such accusations require, and what constitutes a conflict of interest here? What level of data corruption is actually linked to pharma involvement? If they are the only possible source of funding for particular studies…do we ignore their data completely?
  • We both agree that having an advocacy bias seems to strengthen our skeptical thinking skills, that it leads us to dig into data with an attitude of looking for facts and going beyond the conventional wisdom in a way that someone less invested might not do. Would audience members agree?
  • In keeping with that, are advocates in fact in some ways more willing to acknowledge complexities and grey areas rather than reducing every situation to black and white?
  • We also want to talk about how the passion of advocacy can lead to a level of expertise that may not be as easily obtained without some bias.
  • That said, another issue that then arises is, How do you grapple with confirmation bias? We argue that you have to consciously be ready to shift angle and conclusions when new information drives you that way–just as a scientist should.
  • One issue that has come to the forefront lately is the idea of false equivalence in reporting. Does being an advocate lead to less introduction of false equivalence?
  • We argue that you may not be objective but that you can still be fair–and welcome discussion about that assertion.
  • And as Deborah and I are doing, we’re planning a couple of challenge questions for discussants to get things moving and to produce some examples of our own when we let our bias interfere too much and when we felt that we remained fair.

—————————————————
The entire conference agenda looks so delicious, so full of moderators and session leaders whom I admire, people I know will have insights and new viewpoints for me. The sheer expanse of choice has left me as-yet unable to select for myself which sessions I will attend. If you’re in the planning stages and see something you like for either of these sessions, please join us and…bring your discussion ideas! 


See you in NC.